[Binary tar archive; gzip-compressed payload not representable as text.]

Archive listing (from ustar headers):
var/home/core/zuul-output/                      (directory)
var/home/core/zuul-output/logs/                 (directory)
var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log; binary contents omitted)
+ G+ W4F](z PbmETAte7H]Y"j ]Q;]Yt!p Ys-7,ӕE)ڥHW!Fh6,ܨ1 zteQ"ѕ6EWsgM+NW%k)ǂ&2X5nhSʢӕE2o£aڳH+dH4%3{\^2݁~, sIh^.=NԜ*}>FKYC+Nx F蒕>2]ܳ(Y <:x X`:Tzm ?4iRiEJ 2Q=l̃48M"8Et5ᑛ ޥpP~++Ep#ʤlKکFi0+z~#N oze(6Ds%iV2"i>|OE d}^ CM?xZ }hx| ޮ% ju.ʼn;J!#٦AHBt5 i:̺"46 ,\,|2}Z2zv1t8jrRtգ.'5j\bk7+)ԑ.i(>~AIX?Gt0irm/5ũ_/Y(u(^LN%S~aK+ +*B<'h2=mz3 gxEu-T5g(; 5Saxv%z/,z~׷_zKyo_;&E7ڠEՆa?bÅ\J8.pR$YJ_,"zHZcL{V|6-%f,~#-~+Nl%]q¬*.K0>70[2Al]d58OIv]5<9UnX t"XU1cx%wU$w0ܻ-'fc&rȳ_gCa'߮n5 S|v5J,f/|wWl?[1(9Ncs6 YsU.p9&>@ ߹߼L bw(؂6hgZܮVfRNMˣA_~2u;ME3+>ߵ.[k$wD~2_gF.NY81ovŻvޭ"uEhϯ_g-Rv[1niQ}M}R!W}Od> TjrO듔2WVGjB\Y\7a^n-V\[,M.F-RZY~|"ŒavɩmҠte%׍Ng_{d;7$WJDv4B.*f &7uja=%[K] BJËDrpg.;YGg;l8Ӽ}`$+һUںlel=DqRmɬhgv#LQ^\Py5c6-ҡSp`n'54YUڟf)Y^j\l Ⱥy},M Mp3ir$SPD=W4(qTW;BEwg  ۢV=t4Mĕo?Vp%) XU"5x$gQW^RG$TZFreyAM$Z)XH]H"L2Y"Y\ޗ+M%㘤ٷ E*(t$B Ś3[-rS) 0uR+VR2C7ԦJǀŰTDΔH]rgW3I-Xk\TK|\. 3tREdY&L%@R2MDKdk-qw1 Kב2c3cB1 ʂ7s@t I#JGgw f\O0S2ͭgP̺̆tp8EHaQ3 @w5$he *Plt6k'œ ψI1ZmX߁kPHK>uwi~6r!mNm\,H%jk!Vz'^ؠǟ2LBnv@VIY(]p)k))Ip%ш&[CR0rRhy:t_2jM0X*3JRȎ.qL{í38}^3V:9,HuQkU}GVbOTQޫ_ <nCH&*,F2JqXLV"\$r\w$v(Bcwfm eiKohtX#Ob-HΑEDG֊k{`9())t{$*)J<ȪT^J f`d0{NCA[p"XC^AA\`6$)8Ya,Gӗ.o"̓ Q pJʬ71{ODaL,L0R ]@P1+@pٱ{a}'-jd2rĕARrIRADeYVaB*_@)'}GZ+HE<$eìj@oL >lS2F0v[i`>o"JV%Ta,/ 阢Ω FkWac AV՝gv Qb[7kṷ23q:+DP&_-8FwLT)n}OSnǂ>PPkЬ.lEȹHY3hhƸ>)-? ]#|Œq9p7 JxKdTW >"rQcX$ȃJdp A(P`AH:PP{di'O H= jfLuUЈ ~⿁+I*&:6\''A"0F?$5^ѭ `\* pFJQA5YTu4~ 3a^Џ5KVzp `NitTR\0R+YNE$^&ÝuH` iW AVWµᖭ$G?mb#l-g o:jbahU<@`hac:bdJFP :̀|8=J'O=0%7ҡ7AcA!.m>A"VFuӥ&bm30+&!!0"p"'4\ &!K#֘:Sf["!w* "c#rWcL6{qd2׵\SLϲ|Yl99={|LҖN%b/Q=e3jnovUUvu63$ԁ -?JYPK$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! uHCBP:$! 
uHCB|Bø u`Vu[7,qONPR\|Ah3YG]۷YM8dFWjT˗UrU X{G@VJ厗xUJr/GN7z#2%k yᙯ٫9~ZVf oUN< xvL\5 \lDƤ@&ٽݺSE5Ru#@y֊{dg:Mݰ|ėU ڔywJ}zVQo u'h]V*M/AWN׃pnS]xtw>R}&orы= oZGld3F?y |c\p-<?.x3_`CK^׮ngjc,?iڊ14N1s|V[LVuhn\b4Smbg?~SGH{X,-m5oٌ{lee_OQjE7Y OE3/,GW?bWibO-[3 Qy?v!v?gw~gӻ<4]-; Ybu?LsC?[$F!j)zeC{Y\me{8]X]|\fx4ۯ2Zs2\|ͳ0e:~>| h, e^WXPo5bqԾi:Žr<ԃY:V,dkܗ`%gĥژj}g,7ggچ|^w^..W]l&vf|3AfgiP۬/Fqpd⾴JN^ WV <Ee;m,wӕo͆6 xg}M 7>!-xY΋ FRJwh#fh~Z,n-{qx4C mvЍM^Uw8>krv9ou/r`RwPqN[[r:lf, S`simFhQ];TlD,ƱK>1O-x.0\ <ǜpR<*7`uѶv}7Wlcy%+.][~6)`/P.ᝇgWJ<9kV7=VLR%͋Gb҆u^Ϡf" 7;[ ?Ӻzߝ~d8K?m{Y7?]KҐWv{ ,^f^>~Vʬ>mAVO7gޣ |}lQG /|/U>m-lln$&Yc95_pqpTIYrO|JvY]:MUx>!X/~م׾23( \tTNěYږDc2a}t"KV nP#"jii_ 8zkx"q 5и~8 w_?hZpk9?[2 %Rfnpw~>u3O^t5\Մ<3Ql>V`Z%|HELrRIӉ/-cSx,{9ejSoe iO=X lM>>|υ^D-b '|]!kUU{S]c R`H:*knb#9%-GViNka2sKaЬG+SB]S;ڹgTLskƅ#ɬt_Ʊj i^L.]IӄHR|, C)L.?TGק\VtÝ3k^y;dEd a\2qtΎ^۸ Gn6Kg~|y4#q].[m+u2sK|6/6; 2 ':r${QjS8^[秉). tY.,[ -HxRWSdz,_S(r]U3:f4^ W"J'ŕ:-.Nj7n_o~x.?x vH1̽xo[YovkwsOO]nqjQ T},I% #Y̶<@t1 -[1gI;\5lSl4rZ *ͥ*9TP]Jlu,O٧zQ|? )1?ŤR)7/`(#NZ)_?/>i㷣YP.NSwk)ĒUD:8qh>a]5[e U*"O1HON{?{rwN'Oɳ  2O!/ &Zaw/CVEKPtYe^U(w˭pb!>  }}'mq#G&> I8Խ4 ~sse ~20U~N',V?{q俊$™~Xs6;Fp玲i,jv<{~$4DP8C#Y{%UGWq EOHUnGX_CxF얤1 ,()&։cbU SXj阷kR&99GI$e2e!:+rSyIzjS9J7;~@>/?1Ÿ ^1c10Vxb''M1 [2(Cj%3o:1ڙpnĂGH遴焢<%0O֢[pYL;DZ^SsoJxmK4 8CUC1ڮtʀ[;0Y[Q*,2ap5"!;Q%$9lEGʿyZ^EVP Z BiXqft cJr[dmhƖL}qA56 t,+S=<w%kѹZDNFa)pF3mJ2D16AbGŖ&2ړ[aZ;gA5@݃MTmTQhue>Na6[z*ꥏ&1˒fZ0[c@st&n#/Frի[ZLÆ~>HJEb}հOPۯjYK;O ~wwF؍(xՉt'*JhiA(Y y#y1l`\lpQk N)R-U;87Q}pXH VRJ1OD* s@60؈\Dk;\"]#%q.u3AQϕph n I%Q-X7`V\-Sx+2ECWxK)h[&f[U\/fwm\.YBjXb FgCXm`%/cx"f_ Rܖ |tq8^s>[kWzI3-hv5߱1«#/T+*Iyb433{?,V3$;'f<]w٠{g0x5crĘ(DfQ=N긜0.TZSqm}=};ԡar)(/ EeCxے+|)2a#[¡eE|olaxq&+"ۛ]>wӺ:~АTey&S~Bz\3҄HII6$A4E!7ufI̴&å١:3tUgvvnie`NTZ> a8:Ad"")F+jR&T@  I\up0&h2%Wկ_mjqc7Vނ6d޳vB@ .74b!"*&Lo@f@qzQ3?˚ 9I-#P?¹{;*!@DWQi=扣'*CJE]H%U ?N'ov>-Zwq?٢}7:2EA"o. 
#d0qbd MJ4 +ЯTQiՀ4f{=X#vGhPb x[]*JvNxcQh/w7 8J74iw4JkL& (ҨWEW{}DRk3˪6oD{Mա\Ths3^I9"T2?f7"U*ia&Hz:T 6MR3GiD8xu]4IM@}Fc<0<6xO:^?77SCQC^JĘ)M`"aرD;"XK N{xMYAѫi9*|ǴϣL1;ɸ/|B*7ԐmF=ˈbЈV;2^^. 4Zj&¸@ۗWMkꈴ4X]|/@)~s~2oY<' U:MkJ)ym]oW%P3 %pաڮ1.=l2uEt.>{zs4 嚦&WAb ̺H3$z2M䏧xc`}zH:!;t3EF3".^ǚ45sP-y8EŎ#شf&Nu9,7jC" Ӊ;#켹xkjxN[а,>`Xkncyz R ԡY5:H=EJ2:G[ #ʺs^8(ƽW%U86xX]#=ql|#/rЗiJ;5ծ6bCk%}#k:NmHb;MB_]1lv;>ϿjZ@:>pFA&2a^SD')FDy݀tz\hɄpz籲ULo8yl; ǤPPΡZ{h\ߍe>tCd9)Зyor5MfGIm!d2؏#LjG7.Q3OMVb|"d OT*(x|fJZC7\_AU^GX'pQEǵ< Wov|هHtnbD;o%tQ@IܥveJ|+rSyIzj⩍{7|;^49?,G-t2n tr[oy#Ckkan$%R0|"6pZ;GF-AOA'H`tupp7X3V}TV߭';'39aX`KԂS'4޹4 ƕm521.4vINQԲ>~'5dٌn݂] :4)`!=8$#`,HɈiG2#n$:|mq ͬG^: Ɨ'DALH"Mbai 5%8fO,%{9,w(? G?->E91+>56:o=g)^qjY4Yg y*`u:gj8'9+ԄT2c1H@-iF%sœ@QihoFjbp#1p^P¢>̬Ͽ*<..(}<2+\-D_]L$tQm?Ň"3d7FL6wϛ.7y>|y?ZGjy_F[΢#]` l-0E]|) eL/ ?QI2EHl`m@,RÅ竗d{ؐ;b?S䧋|_VvލhFy\;nr\}{\(T`!b]FO^McztfJ/w*> 5woWYbd3_̵m< ǫboz5GkJOW;w 3{tU3]MC)WN~X~Ft~:e`Ũgr9ŮvVvϺ|ɺY5MMKs8>m6Fa}/hVaJR߾oƙ/g719jtQ?QCmM -5Z6ߺr|y="Gk i@/JC#;mSlN%F E920v@{cRs/=9wHJʭPG]e6#!nvKRc1&0l ȧ(1oCF]ޑ$LrVr)\bqJcc B A@jThu@C=pOrOFۢ!;xr:LI1Űbiׁ:(#iDD3ⵤKR6nm Md7rǯ\ޅǚHvgcꦾ_W,]0`hzqSW1ԣU0`Zi ^c! 40AHq+eLzYپ: ߗɴ`3UL:d~(`B4a`8 aF%ɩ&ݛl|ݛtmc;~Z3O?ny]I?}ͳVKB GY>od_[+KYF[ȹ/bY7ǻo?>Q1|uwĖE|.UOePl?ږ?, (ʉQ)r;*[~Sޣk9PoGE /v> &Q!IoPXkbn8Ɔ.N~v;_-5քDŽ"כ{) źV",?I)l2W-uY&DJ0Lb! 05uzS$x7rƛ uXXA*}6jkm&g h%֗*ێbvhs`+aVÇtmf%&~|۰f}dH*o۞=N룳cF[Y4GPm$ J,s{e~#ᶡsthf' ‡'xݩNVa]^IhSiV0Kj`iQ7 %@oZ8~rUƮz'h -]|Jҳu86krKkc'Ax0yVv fN.I\O͜Jv/p-f+l(=_n/Mq]VS}Aܼ`ܬV:i˻wm6#0#^Av䙷xNt]e{[/L[9V|׭ZR\)Pdcu&AuС -Ax)dX^.AbY\"1i$`Yܪ}E&a̘ ,'ҳ T^EkVb"F309 %H5IX?G3ߦ;i+:t ,z@W.vA5l 4S㻛9u!ip SQ]1L[Dh$W{=i Dzn[ZMtCym>h=K}תtɵzt^]fg9 fVvW&/VrVO.`Ũ5IoJMrs֝lTxU0 Q仫EdV{1rL_z[vEMkς!)Mc›g` de[ dttV}W;M7=ZfmV8j~NgZI+3YqgޫU3H8$ry}z掯+ȭn1?NN]^'<>q#2ʫ1ݖ´2(7%JL%*^ƺځ}C،3 I(G# zRJ8^wFNڔ}hjBF%޸w͞~ϻ=o2 +LIl@æYwdj,y"’%:c)Fc{ާ?SSHIr< C$gK"*)N^L< bW%T }8eO..z8ai7w<}q 8 fO")("_If_?^&dpm* IMNƒ 2v .d&HqT4L&_9e?TԄ_' )?,~c*ک"EB<с$ӆƷ},`UînJٖLTrMϓ5䩮,I5ղLC0dN&Lйȓ'$o>UOXs0 *Jo+a"8D$<ܒ@H5#v,x@d݋q@ʩOspAqiT9z]߅W5eG9(kS9c!KY? 
EW >=3S|"'i9 ޽ ڛFgޡ~ үOqx=(>?v35^c}KP'c8|+~ˤx|YS\`b2 O]E>g5 '2!#c{0PqnN`2wBakćBAsh$A,p r< R;Y) L0۵O5 8@Gv OLƫCϨS^UnrLpDI STMX$&kR"fUƎ@Wt7,F3)x>snHLNDhH*Tvk5'jKb'Σ!$Y'}Y[\_G40M1Qvލ\2K+f{MT Pc.Fev@$ x@ nh;rlTKY#aܺBฎlyeG7,U⨮#ptb CSeqaH=YtF`%LȦ+X2mۓªZ/U(5*;.& sgT&*0X={3E+(|47,ʉ|pPrM LAurͫ@cE$WvɲXh JkRȆqx6m4!7wRxW .nb\Qp VKPaXQ~ |L!l'F.Ifdv f)$t\^X:رPLCLaZuWfUHeRv}g5U6LgAα+rx䖾4 )WD? ab[(d棸̜[ ==f9>4gQg"Vm3Pd(뵚#YKx3 8N$ǁd/9yYW`)Mx'Q۲d[2[nŁ68i 7eZBM_͇-@Mz\,OJ{͔?9\tL\CÆst/zlJkU-EW=&n\Ua7l{]3WxUxwޛTx$Ds}+ 랻?ݲ؁8R]CtR|4M^B &D<ŔLǔ-B\ɪbq,!"+˺ɪʶؖj[7Vpcw>]]Uc$ÂpKwdճ0 $Lv2ܬO㿀 8 o6(;d܆}a(%@L##?H$>hڬ#n?`W ijA6&`t-x7l)U⑍ ( 52O)6' @Ah0@/<=wSsxsلLMyg+̙kN 0cKvg>xu] #}JCyT%gt;ǏA (V`3N`lg! ).T aU1fvD(]< xT$N,F=f֏:stIJٟs\>2]c83[>MQ<z: XL/JB Li(B{Qq"8kz6M4 "Usi%bdU!`cci ?_s2_fk(n9zWJ-dd:[eVQ߶uQ(W/:;>;z>% Kt.ķ'o0^oG8½4ChfmM2 k ]|Ѽ<|qx|}trq|xuv/`LU_ݒɾ% @jpuZxD[J|ŗ{~vq纯|v^.O;?3['xzy.D 'a$ONTk1E,ժ/1od.QL 3E x73'YDYxYpD?'CE馬I4k|_vn /-'j>-)ZCEdf 4Yl*|U xg>U81⸨k u^\7[ѵnU* [YЌjbB,[Us'sIV#Re܊Sڎ[t)vlE w>dz]:}Cَ?l:yrC.?وItai\|mR_=zOj qc{0'ٻmW\vOw`ٲ%\ w7GݐN[;#" &,h4yif4B=q]6nZd@_ʡKZL]2' d{&Rj3]ґItƘeBYLx9+rasUr{鍅4wh:ȪWueL=a6epVזeohPj%N}UkTZ ПYП6p\ɚ%ft1^&=nčS0 "CW .(P$xhH ˜m(DnꁾEd` 4ҼN/v[dk[jQ־D>ṕr#MFبUNt飓J~?7Z[[Eكu:s[u:4F +,e$*,|I*_B[ծ|}є_3u0]mzY} \YƝ^NnH&CLpj{R-0>W6FVɊF`JnczۧdfN$[R뻫6qʝl.EvmmPJtL3~G.3|arآVgo&w4u8_(iE~Dȣ m+${0m1ˉu3NhMw0mxl ic%-K\cs&Ck8o VS4P TԢ:MmpzpI݄>AIRDf+yJƟ mf 6 7o]"Qc˯{>~X?~{?26^T?A=ftvZ/Yoхk%F7F+VSQq[`ku ɈAySy0՟e|-Z0 ^\B37Yڸ؁Oy+ū @@|)z-4*zHK/M ڳST '|(3Cbٷ;{p@cE^738 {jhFk,,ьN]\6*PWvfh((XU.TPsB6%4!I ]xf[6/wb@j)?ʸm9&~A ZnA ق e_>mK8DJN>ya`D6*8@c$cWyxY8UMf⁤ԧ‰f܌]Ikd ]gxp*n@7M>3ݯ.f7?Gw֝羺b\`O? 
j|Cdfy3/|4g.@^c[<:B;t-ƀ\븑m"T#v,^zKd.K&5!=\PUV\]XoNu}ٳ D2NRhMi簊:u4n6(sB+Mf>HC?@gR e[tX8#d/gVZb1Xbw IlۖmYIlDkC B`"Y<3 {ON8s; FpH0[8$w"l=J/=O8g+ ڈ+n6(H(He_pz)]\ lY6Ts##WQ˞Fe:*))Uѩ'Klkl[ڈ+$&"ZYuqT2WQ\y6SoSk4woT2BlUhhunlXR(Zjd&qnV•0)OuMgi} t"\IEN ]ˣ$\<"}[;u`Ґ8 r!<B $39 Nb,Ns8'hxȒDږSnÉL2.>=W"R%Cǎ%"/O).GD4Z;Y$^d^Ӝ\ߦI8p(9ߵ8*]8xO:KȷXY I^g7fK!rf:1htYyһdzȞLtZ97/ 0%У <_zJLzx|ϵ?[+"<™{nLF_&/ї#A~O藐~Iade22}LF_&/ї!e-X|챁V'1]]R ;A`+湊s\xh\eY4f! B-x+dIP lPd^PdP B<¡IaeRjy!s 2 2?2y:eP䉡@-Z&LVj(22)LLLLfZ&IaIRjy!\JA-Z& Z&OLVj 2+ەv])L C-E@-A-ՄZ&3@-/xj\[Z&KZ&O LVjZ&C-YI!eRB-P%r \B-P Aw?$cEAXqM0פ-qߑ$LD"K1eْ -78r-C) ?pˆT3E1̖ za3e6_Ey9nD0>#Pۋ{QK -.Dd=w\1JOeִ..Zci9[0fg ׫[k-3`[6wg݃*y.j3 .k[Փw!sxv;-w:QF4j'vO9sJ ?w AvyE"VϓF.!8>p[k=s)rSJ(幑-p+%J@"|B"vdI鳈8$"  <Hkvxwᬺ{FSXd:G-ȾcGL $z1W숡z14vpG̊'Jĝu^e`mKU$hJde=w1Ǖ!1d _8.6b]5 =x~ͬ%k5ci])saZ||66hBNcX#3n<:A-(AcZuI>!(-۬Ł(C \(#)cɨzDXv3Uyɥ% <u RPPe{vR' ! #_aܳN`c .ü_Kg#/pmgz^›f#q^g/i.Y.:H8~^MfN螭)tjܧ>}ڸRmT\O:_/K 0#[&u,i̥fȹ-\TXn5jzR탚>Z'R%Mi&q6xҐ:ycdJ[BL-ƒŀ.Z΅O%խ?~qχ$V,UT-7su*ǔT]js@tjĈiF9/MљTav#Y[)#dJ@T=ONCC[cY_:7جdFuհr[TJl15lyStqOTnmqK]i)AҏZ}."|zx^C&Jr#u=d1J7*l4z}K ASbݽ=kɫ'@(T:PnX-zW30}hWI] ITKfKIQYi 2us0a3x"&w^9#s,UR0[7 -LX6OweY΢Y?{8n!}8%HdC{{x@_ ~ͺةnol'xݓhjD͐sDLzPxtcor q_=7{kI֎;Fr$ rVEf ?~e}A!bu<ħ,n|][r͖椾~⹷8(_o%^.)KlNK/}<ok^ю3<W6|[g;><(^ƼK;|Y96P^H`Ys\UZ yDz]{ʲщAi+n$׊ TB%1'p=?̴(@'{ 4hcpH\ hK,un#:q|2A>I[5sl /gBhgmYVR&-ryp LEFx b Q a`cxa(B_++j%V"K;3y >]O[:,ՅIu(/$L<؎)|Öd#b~{^֮)EgYݺweJoNX(L[ +8,Q)B 6F1/mH_u1ғp\a*rQpje*M&ǘElZ<NbkUiAd<,cm;?[,xaLۂe5хe5ĪBNa5BHReiƚˤϢ!-i] IbGkowXYĽ>Uq"G F8]^^nn//XVr̹l*h jM2#"yw;խyf Vr% w7E anԗ%r1K"PwA;t+PǓ$OI7])CYc70DF8GQ +yzxf(|j6?yɮvkYKHx0xS0,/9$_48[Be?}~dLMx34 \R Dz@`ad0FM!*ү5H%ІW-ASzUum,3Oo¼2s&$xu@"U1' x港GVZo.èWxן+ )Dx ec\g1[S0Ndl 6`j mkcP GE ,0eG5* $۩Yj\0R˼yȕTQRgIއw}_("IxbO0 Ym˄?ΪS[v ϑ m/*~ \ӻO\Zq|am~R ClV.S`%10AŅ%m+́տ#4:w5 @(~tiXQ{诖uUݶZ_AwR 8&yh^RU\/ûy@󡚻m MFjndoU]u+ e3zς<mw &gy'3pkn裤%ңT*P8Qyk JyD BVEt+m$v Yv¿ɉo,/okBT:|,~B΄i]`#\1G$DN)bvB們qN&,2pyi㘏B1Ur_@ QUȁC9On6=1z?S~+Hq;]V*J d~m?=?Ըo Սפ9g}<bbtm'$֎DYQ;!,k%Ka44t4͵nQKj߰y  
?BȞV>`u]|:6,r/i[ nP[.Yaz?[.R&XP^>7HT/'jzޡh6{F6Vp  , r=5;n]ηtmn[Vi",CuAr\?0vh*--A>IB_OE_5k\_l^xv|('ipSइ+V愕lM5ڋsCjÜBޞB.࠹\*Y R%OK^`*X(s*1ε>/&%W~֧\D=drw=B$$BgKi%bi OCeNWsIe?<_28wN9W2_$9)&Ĕ:\l{Lk%L\eoGg$yy+m(*[ uA0/Z^H=E$/tT8 }ka* Yg%WF )TXL1(k`L!S.(m=u\ Ēl,7dcR`JN:F XN  I[mu+1'Np( 5ediii'ma0-UhzU'|ٌ1^oMDvf2-ھ0_z+gÙȩ'*Lڜ)+pTK9-. =QނDO) w]'ҝ)ѓTX($Eɹ%T[e(= TJ0\8LκP UR+=8{඄y56Y.V4``CiQF/bwZ,TAFI<`fLid8={w0HP$}Φjd\SĞgo]Ue*xxqedQ.>8z7n #@xϫ?O7X~w |vQkx<dԒH[,r36~#1s~`8z!A\C0>+CE!Kr k"|^ I/Y 3Ig'/U?^Z?DW?~;tϰEW/[Wݻokr!FѪjbAVMhG1=1Ɵ/ 4ubBjc t4[vME>*e,6Cwkn*T:Sy֝o*w3-9p6l^i]ٻI?{B΁LY ;1AmûpfyAKՙĆr`Ư~p:R BpG5aRgK.1X_Oܥ~OZA+m'\Jq30(YH Zig`ƍEd"_gIwt.+?ڹ ѭ  MWR |XkIv&Y=-2{[]JVu~Y%b5h>Ts7kd4Kx@OYMTnkOw< rO?9'd7nxO.xOxxOw<';xxOw<';xxqHw<';x_C y)J$x%Ov<ю'T,ю'D;VyL <1y <1'Kˉ/$֗,[%\x~^ t(GҡJ`z|qS6b'KUYQe.U7maQLA KLeu^@n5:GAic}qE"a2o诹];NKV̳|KVh؁T.u3fM͝ﺻK]X& y:uO,U-ow1XGxzsE:'i/REG9B Ryí1:yG$ΐ\=~󦲝SsOş^޳(++nŀ^3#-B Hڤ\zOXn`lac0ju[03+l28BcnE"42R(@%&n1,(~*{ޮ\ z}hV=ݖRu3Nϖ:]:ܙ#'NΰT+P,XggcKϝT rYICE cHYTLğo"FԟGϸ/*{0^eE/kלc3*?OE˛2bu_|bBư*ww-1CFb(Kq(-~4aQ6˗"DHfÄ;2eȍHAJ\F "&Og{(„Hj*5E11;b"M ιvD'I r d/cW"K,t=B| w v%F՚n9^y+ݲ׫ ̄0P0N;#~,@gX)%*+PVf=E{<\pD8ӡuױGΛjJ1tvsbD2 Cn,+8}|*W.VjfZ*"P PDȋ'x7N.\@r-B *XsJa4W-/7<(o$sUDUo'uZY!kAތ^GƯԽ:"5iAlvRO=C\PNӖ'¼5\-/`증p^Mdە8{t9jEPF,/ߵ>PT-(ւ|>M|]X4\CO;o|SA+ ! 
QxxqDEk }<ÙI $($ǹlO7C>HDp\E1LK\TBHq"9 &dL*`clly$N8!:Sh[&?<ϴ>c||91 qD$+kL(f0r/+"AyD b% bB,1L66D"$V& [p^3>Rr{H:5w:zItgwq;jt}´"`7YŇs>;[sHxFϯHL*+)XkY(m9PI!ZP7{@ il4IbKN$O 1`<81<$,&wr7}1'(VRY?5NEނ}_k("eHCSELDFI5:ɫ9 )U"$]kiD2#2AqTG#(ǰ8Y#L9JZd2,96#1$,j8l5m{q)[SoT4$#Xs~K{DNm7Hslǂ$ ^rOIW̐j.M~w&oxu(bkdmg޹0c!iS^YfUr-<_My,LN FdDgǃ+g}p% eº"z1?JcM aeЖ )ݨ!jӇeQ'X65=U-jВ(Xֱ`8-vU&1zY61wo/6{ݳ`kX9 `E嚍ӳT p;(Rk1ϭ3#i[ =";؞7x\O瓺!anV(Ǣ_VBt6q6M애{9Icc24yȰ# _ҎM2_.{x i1QJj~C%N,f˼id'_VI3>` K 6H: LI8J;.^# t8薡@!6Z@7g$?׷޼7on1[㷛j& @e0#7ukok] QEyB\WdGs:l$p?lm+%xƸF&LWԹ!/F0b*et`7^11E@22jHDqN#jrvӴ ?p6["ô%fNpn"0T9&vEHddcK0C.T0YoC_N"嬈ÔrLR8brI1ke R+Erj!PT 9[Qk_P5.#džCɃe s*dOEQbƘfan.*a\$Ԅ"X9'v <\YoC%GwB}8OHfdI ̗b t>A iJDݑahpRxk<1Aq6]O Y(]ي߀z#qH8!HJ*"Ubb<~02RSNh\өKv+vLw[an$^P^ǼtS]9^}XtuBO!rjd'Y" ) T;P4.)30,Uj,!z/n\KR'Vds]Smfۥ"4^DhWK}D\f)VĤ1ftLam"IȰa`ɗW/eÎ:lwҋ/Y7S) #[5# bDPFbdJN-q) |e㌻li|.y9FC6]@iXPgE2gn(DC 908Bbi) -S`9 wV$!$C9"xi0[rtw-Y/'K#pј g%؃%Cg (#DY,hH\ω؜LMĨ0 eqнLG2a$1Y/sO< ykǶd=gkȝ\⑺X. 4Gth^.Ӣc?f*WJLJ/V-'X50s!]&*J[@'Iž؁'y;CIkSa-13ABk޹ܧ\Dk,U__)WwlͶ $`5$Wbep܊aEhdPfK&cdML0Z΂P]ISa;BvNNl3 ^7c/ htSp}w\}9$G޿ :>Te3 I636rLp~eLݫ+pr_WfmmxY\ՇkpmWF4:M?X}.$ʋើ֬ Xoa/x1p7Zt3y;B4pn1˗4Y &-ͬvM|7w09!w.\0VPA|Yv/~:6KW7aqwЙ`U~VS _puuJ{فǣ*1]˧mtjקA2Ȗ:Z۽ZpuQүտZ濿oyu0Ed&py!e|duF_j}jt#vh@wws{bqr?nT=W a(M[FEa ysE"<5ӹQH\ 6xL~73'_ {!y{WreSsl.bfgY{HbBUDHDfD%5I;1h5gt̵q1op(X(:?>?ۮo9:V7ֆ^ZW4lYW@Юijp}qT3^puv;Xx~y zOUqn|s4hA(alĽG^_ nz(Si{'IM$Ȃ|Ny棥q|NT oNS4i|]/I8;1$át#E{NZiӐ\1cdag.sJׂe,eCCπ_g/,;~o t7Ns (08Vm۞5 "L_)Yz2/,fG69.:XeBa@OTC}UU)ՑJE,Ta^{簖+_ 㝃.ʯŷZT{]?<:w6,=b`sJckć"VsɌE` #,%!X"bw7G7y=nwɻ@X+9#qű(2Z*%0c,JqH< ov=Խ~t v@H! MheX1(f6bRõE81)![AߩugXyqJ!Z97@ D[$y'1518bI>Sr?P! 
AT)B33ap DuH'Vp}q H ƈ3H)+͙m]{oƲ*BqG"uhZا-D\I~gIIdRñp93bIr0I%BrOcP}''Gok"9r!c&, a2A9AQN[t52GF YE~=<jQh4*&3偡d:d 1M˸_Hu0.,Rg8cb3XfnhX$,xGI4 G'ߞÂxp4Zr?hh ?>9{&Q\zvxU𯯁ߏp܏Z#:9<==e& I [X,`Ŋ do<,,jP0HR"Xǎdó79cVR& -"zVb wԱJ(s[Nz]w?vrpXLojN듟v'ώO;{s>_T/6S$L;dQ+08@Tq 74j u苍c_H|{rr:z P8V+%qqPk܃'>Q Tt H\+@5E"W]O S4u _bBS;jwuϾNrjjQWi.v 4,D.ݙ_Z!JTףȊ!NɣSZ`iZ2ʞF]Gxf芬H1\A+ 33*ծDSZ4%+)( `<}/]׻n}O߽1gՁ \o@^Vuo v> :4yc0O-ZQSw!$ <%$/Gد3{8u!Meڕ+ӒM+ӒT%ۨLK6LKVl2-neZʴ*Ӓ *Ӓz+ӒmW%.JXl2-٬2-veZʴdiɶ*>S[2-yPeZTm*6iʴMeڦ2me&! sKZ6irh]&@ᩛ\(aN&dV6(7p~3:jqwMON_N_q ra')}ͱ_s s/Yoߦy _$ ]0fr|ƥ/tZ}b; ?5}G\{Fn3?נԛnëq2t-˶|<ŋ)g+tY$WW>f}rbLșH_,Dz )(%( 26a~ hk]),`Ԏ`Qti-ɂq|rj>}3(< O_|ߺON)pi?#X# XqmPH* h`vΓ:zBS&]I'kYWye@mgfV|Λ瀺:gcLz{`#m;) _L}4)N?][#YB#0/])"P`S*NR^zZ9i{sD7! 6}F^]j}BQ>&2n_i>):һ uoi uki^pK CL"pC0&Bl ~sl H'D(L!2 ,%& }t6A:ߏfB}E3u.MK4pfC83fpO^]YM<0h8ndi"H(jdY+ZjQvuAbxЎMY^b\,F'en.#g̐K@ir1Ae^軃"IՏuVbJ4 Te9)BLU&Yy\pb%M* VsmNlFkx/l ޴ڸ:ޛ@$z 2- aR~+Ky# ^̫O`MP{='Cfz3b 6sΕ+J Q`ؒYc1j6E[EBYgLƐ)k,pcof(sK T"9cҷE ^aLl 603VV 'p뷳(轴X*,#QɩD ƵGAiP4UTY !*E:RbXS&aJ;fpk>%KȳJT,³|^RLkNK^; cR1U MaXDmV$Hd,gAܺ3T5Pl5y#rm^6&IZƑ*A7].;6{qȣ}l$MDqz] )( \Η^lto(.:mqX,.@PN&EƜ\&Gd K`,>Nf}[ .ǨW;|B`*$J6'm6S l Ne[ %=w808?,~< vϫcFf gexPQC{Nj]ut .S_Q(1Z)ϣS.ȣ7(H&Oǒ"C14Fc{Gƚ0E!yd 4 B1ncBzfm8,FA$Eцbb4v{d5HXKu%5Û ٷrZ1RO8FЊnĂ{ՑiFiN`LE6\[l&+fV6z}Dء1A1FLGCG3^]t^CNrHMf7^9R6v\yHPsKDˋ(Ӕc8"I"8D aEB ):RX,Ԛ! v/ש_]zCcP8mGQFdWIu aD0)4!sE)C!*r&FkgE6-* JYBݍMTjQBn=!{C8qt2b!٠4.1bҸw=2d7i4.LVĵ\EF.rN_yao-U|"RXr؇|k/qƨ/gRV$d[@jq]Ocgp0`@,: vA5ڧ|v$JMLryå @3@cvV9v(m+os)]M#7Rnd =!"}HI Q$Rb%0l\-gygW<.W;p$тIPMF,x~"F.\ļ9˺ bj⑅b/X&r^]*Xzիe'-ˈZkI6rV'նdz~p#ܧ0lsrs.Fӭw/OCod0\$d70- *HFI6^wh,st &F L3XQ='3hWQ)aeP]c,h̻XwJSu"tҠ̍ؾ_Rmq4 oM|/@.עfoqܔZ~?gI!/O!gk_t~_f|l:}R$!`n9"eO~O'\iMqr ?{W8`X(XL/v;h4hi-KnQ.k}"IʖdQMɢU"S`2/"##X{P t#Elz90T\?lJeo\k~2(:o;cƌL5{-#chnDtںq{Nw=t-UaHjp5jxT>s>`.0 ,- T)J낏[XvvtƳd&M_\QD*o)CT3Uh9Ŗ} TtJF(JH11N9Жx93(v(K[OT%U߬qӖk?_z@aFhXβ(PQk %I/P!`V0/20EJGt`ZD"rRuV|Ϭ@Y1D 3@͎ r[ B{Di:s"e,10BB23uS/Il΂Ksg H\ׅGeI˒vHI뵄@(du֙U"zmԫ{5}Vr_/@uOOt W ՗@Ի:wO? 
;tFү^AATyX9(BQ: '%&'&WiqjW7&}ԍ} Dn$.6+4gݨ\vy14Gwxxsֵ]vyp5d|7|$[&\WXxTsk>t6Ou!?)ݟʞ+?0 ;R:w+xj:Z:g򫓬 )`Bea<:靳#?GNN#}P UN߼lRa7p3Kꑺ5폺J2uURfuuJHΎyם5ֽQW \iu q@,3TWQğJM/cYszr g,[XFd|+ +"bN"Rښ^ϰR!d`qol"Wƅj5Ƨd4cU*ա푺J`B '▨%UfLxJ+Jkuj6QKĩD%Y]}uEw|wWW{,0}guڏCog\ {+k_=Ns`4JTy~pgebKY|x\^uŶRUÝ1W$k4 RIUTK۬5&]27R_+xrzs$𖍥By'(pÞFP%FG(V*/ָtWrF,Bq@$㑅H>HԔ1тFQ4`pHR@Mϫ-ۏ{ /S]uh5Z`(3*h2:ڀmT Aa@ڢC_̲D&Q2Fl)|~EuvxEXti)Fc~݁%7iQh 5(e:;~k]A9e?ރůkVWeGoGx>sL(QmGs+:2,"1rbrX+B"I_={flkԛף0wc_etutW8džhֻ %`ʁO? N9)3Ym|f&5?Kr<9XԴ-㺸7cgp0`@.,Fʠ2P{#- ZGK! 3jg٣sWL#,8ZI}Fݠŗ_H&-}p[JYX-dS 5˶vGbB:Gb[ WKHV>tQ: "+S$3AMnI\ULrZy{=̔D9 NF# ƍ*ʕ ԄLgR25W?r~G>"VxgCV''ZL;gжnԹwqͰ:W5vX;/yƧwfAersr%T8 yS#SD2e )XZYh6t!7gYe{'Wi I-z;Eujy_JYgl3,k$3 f:Kh\]`e8P<LeһZ.N2ۈˤv v;00DvdmqIA/EZ~ϲ;%-Ϛn6uP78\BX­o]OR3-L osR,o-s.9ܓ`cDP3$q ԌOdLcn̘VʚMTYM~/ \f3:ɿ,wA.9rDx6XnFic%Ϲ%W2hzwNXXT@+H@[^# ,+;[Ru%_. x$ )[X1c@^ˈiDk45[!-n\^c+4;yKm;a~O> KKB9UaaR"V3lX/]/M_\QD*o)CT3Uh9Ŗ} TtJF(JH11N9Жx93(v(K[OT%U߬qӖk?_z@aFhXβ3(PQk %I/P!`V0/20EJGt`ZD"rRuV|Ϭ@Y1D 3@͎ r[ B{Di:"e,10BB23uS/IN Y$53u!c-,iYҲRz-a-YujzUv3*^ݾo+!PQjMU/0\,RSX}WNFV :mWwb5٠}%-9Qx%u-&˨HB1 &D(fqJHA6Iz|HEweX,ZٿRҩk؊۞Ʈsr=M & d$^% qJ`6r'4R( $T&,`[r>)+~X*^֕. &^ txyAn5X)Гo#=jC'Ziδ'讫a9>^VKLQ~uud W< 2'I͉Jt&'>P hRN}{ԫ!cR8`hU,m=()a~`n^Xxp{a[*=VMZV?T\+P]AuE(Ai (R'Px/g J=ԧ _a^GE:|Քqpd&<8?L@n~L6r:woV9x- ĺL _@VK:Wv:S慧_XbK}<5 ̮s>#| ݝ=\Qf7AuwDn$ +4g ݨ\i0c]ԣ;[|Qgr.Niy_<7t20 ;[f-{.xK.7s@yq,Н;5 ]y:S 1NJnh%"@ F/pJÃ&쮯qwu(N.VR mꖾfYAlJa^iGjd>9HysQ '>clˑ[J_TK`{@͟9 v~+҆hl %`ʁUnۧ`I}'=]M(k{f@4y"Kyf%8J>a! 
4Jw'?G7\Oʥ _Z{}$tHZ&ꚖR[wm~\qyfTi9BiQ:Z.R=|8[|霭.%p*r{)ByQqk dMK- uͰ( Jgxtt~|Xlk[edsNku]_b 帛c`>ze.X &vêJ9tI8e~~ X޹Ey+\R]LFtqաK3gNEg aVdYda D6cL0q"K8MIl%*7vө%{ y{d&:]0WGV1qE0o.FzNEZɮ_(/>%St=/*Z_ wҀ QJҕ"Y]`n7tp!0=] ]i¸pT-(8'_Nth:]!JK{:@bw%BC+DUOWHWSeGtUBwiL=]] 76yWue]$]kzDWXY ]\ͼ+D:OWKHWJ0˥GtqAO0h-:]J $]i)Еƫ/thyw=] ]I eDBWu轫C++(6ҟcW{ {؝%Еlz9 F/o_qyd !tj;c/-JtuWSńRyCWWR_ uО kJ+݉]+Dz+\ BLBWVT+>6ğ+˼YBNWR۞{}ܮt9BWVu~ QvT{OWCWFF2'jvL'kiW̋ln\m2LiUGL69 Ċ4hi;C143$֑y|n<}FvT+a ra)f'`iT,r@f׾ M#*b(eha$Zng"^nszx[KK񽡅6J-_ʴ/֯=usM[Qիġcɵvi OˏJY= l StYE:A NM˃)&.Юp6SB*\l <?/|.f;8˿d1x ɗ2lì[L>g h5,]ZZnVK+ozg0'rp[KO̝Gq0/IVq 4e8 zXT(SݩM;I0sIg vߩWvUp,k̵Vrݱe{U@wjl=0C࿊$! I1޸t?^vlM>Jǃי*{3ʏac͔$vov| 9m ƾ#U_GTtV3YDRMrpMuLBPXM9ۻW-R;7MQzi)dRI4H6t{˧}ڶjg1;N-LUMiTDUe[fu^=I 5a'4-dTGmG:ٍ$D+\͡>dz--"lɧ:F`&:H6βD$t45;T#=>ww٧M{}0Ĉ {}}.ճϕpa9i M9"TMoDsrrQ)ȣqOִD v)؝b4`X} &cpΈFn1 bs@ V3G~1Npހ2ςp:ߎpr0üDvA:v6=؇ ]!\whOWtu8t%!zDW ]!\F}+D+H QAҕ3A,=vp]+DXOWHW mGtf%+C2#Zs* ]!Z Қ,0 3&+Ϲ+D+M Q~Еnz= LHV}G>VƺPz =]UJGt3dtut8yDW!2ѾteT{:\X=+%Bv=] ] A6B RƺNW +֮fr+DXOWHWxɻT1o jo+@:]!ʮe_AJ73v-;MetB u k.D{tvZV}zs%ld\ pbƁko?wzAx3Tt*9.inQp9|SiT$gˊ{ṗi 72L, cnPL9z?/kp}_9?@Uǵ_[,nϓ47KKͫVn~*F?\HYẇ-l,ހͤѢeWvgء]~>Fϖzu 7pu⏷m-i;[{@pyyeXqspZaχV˒IVຝTW z(UOޔ7{<75]Ǔd~EL\䜒&- ǂ8M8TGYF,QGGVJL)~Yé|ϊQCi 17\ᤜd|DbƙZI%c1,rYZpD5n\NAT(B=Ðˠ[I,_k&{j=(gS)]IXl?9*ڎa]L'J7E >Bis\ ɘKY]-*]yCg-l1Vbo_dZu֊t5ȍI ,@N4jK2ꫫG !t,V7WTZY_ 0,ͲB(Bpa$5_i%70]_:ha-lv2<Rq8+LxT5pᒫ,~v4㏗RǕBQDŜ0/T9&-k$Wˋ7H`YoC63*{G)J?gHH0Ά7*0 I5Kj4.I߫o\wJWW$;qבG]$G; (F"rtΤSwd"\Q1%xq:ܛVxuE-6/ iatzr5j[;-GEa0.`y6;d_!_mviJE2%oM+5<6cM鸆q8r;!{466 _ZqKHΡT9L~Ζi:gצ; bR̉0_{WsK|nEj"?7PHڑ_?E0~kYdO > LV;b>f7pՎJQY7jݳ A^^"a*#i`܅vUY# N)wZi[J?g?\}}7 7˷}XsSd)7%U40g~ h8<"s.iDBUR=boP\O?w]߾>bNOZ'x/`:!чӃ7547*7ԓ >.ǨGϫj"8]R(u~Y۝l N,me4_2{,Ci1UJrEg.Rh1F5XDGoP($L,P vKOT\ QVû*/4Q$DOޔsL(QmGs+:2,"1rbrX+?B"k*^"SцZEo<5 7b|j`*HǼ22Zl0@ƘfC.ljO<~ȉR4gKrY;ٚ*7"ēt&n&2ZV  7kOOPn.Ց#ڧNC Çӭtj!vC]H]Ho6\]m bl@{?jYb@3h_OrٖZzR+,84q0͓ĤK~ 6M(Ęww3YiU7$ب.%䀬R5.<Lԧ_1",DDꥦ0+Cʘtcϧ=Sd//GsޝrFsh[muabss5>,$?ߝDž,_ịUh4BkM̨S) F刊`*0 
m!ӄ,UfD&QXWlvO^o_TVj(BOg,NMȥczvEjr˵Ϲʩ;rXtӽhyQ@R(DT)%0H@Ц1 R,$jtmw:GUrx; ճQFOR0ԫ$:Q ̆0"d8M#ebpl *U:J@:lD`hc< zkn2Pwg4U 6^ȹenrM X̵Q`1sFqx..֚*,LŇY>(,'rzޡ٠r{)l,ٓ4|dܠ ц ޳[:6~7VMTQvǾ0t9 Xv74 DDC_E_gV[7/<(-CQEqx|| -G)3qn9 i>v].}r٨ܧ F/`F mə^`l>ZFba({YKQg瞓!D!e!x0k5f,`k1hFs+%֍jsyycuFReNX,_vGE38ҒPNbXXH&v  xt{9dM_́(`qAm[eO1ZLy/v~[2U)sh-JCR>RL@SNq9%bdF/cLoD_:I!Yq|îWGm%w!|^%n8$s#&(\,Lm/_N҉eo|+L*ƞpt z|?A W0r·oF-{djƇQEX D4JfeYkY7(oN'\(Þp!-h+LLrܷmT<`ܛWp"lu}.\qɝK=8`wiZB_EO`*jdJuAev|]<>ʀhFhx jsMWv| V^ &Y]GϫVQW{Q)8,uPWSW}IQU"XG]%r5nj&wt "-RW@R5 O{T(ϳu2IW{}J=yzUo/o)[ؚ<],^~-cV<;y-@d?{:4yYE03~z9 +Tݗ{1w)%xtRG^֎kmȲBh Ɋvjg1ErHcGпn%=#€E6ݧn{ŪnlM7$xU̮n\COW\4MNapPm:W 0ጜDŽʔdAY2K eŃ=Cמx~nׯڵMf[.)C;[sInjΝъͦn-dꕠ) <ܚNfn˾=~2wE v/ӂa4s2_x&wX=e0ZǣfMjWa#vfZow}آ̉(vB(^<2 }ޏnͯ^f'(zv4e/<3n'>eb=^lyXgjg֠BNӾwb&ຓ6;WE(]<Lje;'&Nl%grE.7!v;{׻{K퇣qͻ׏u#4+wqvJ lva_FNۮ9?~}tߟ>{ӇOv_\W^Vn)wX޿ ǯQt,q錇)/(^Z̯V[5 ~k֟~^^30_l|ǯr}pK8 !O0V ΁x^{} S_>^UA)UwzR5RMN>+4X{^7|<((S* (ɐ:ߟ:pW7tS#_|w0BΛ{{ЖG)?=VIna"7eX?R}: &>٩לr%o;W[㪫m48VÊ߻8x]yeppvYFނ3(tu=ow=p=-נ+e䔯]F ]֮ ]`UAņ ]qkDW]*p\*h+4Tnҕ2FtU[ow-NWݡ+I9b kCW._*hv.9]v"]u`U+ɺUAml.ҕ&T.IW[c׏~iLp=0T=Mw =&&ƮƓnϧȆ)|hi- [7j6ߪ16%= {F 5_]efmM6 ?k4lLXٲg1VjXapͧs_IsYն.vYr L}uԦgn61ͼmԒۙ%ĶR1dpFt*:y ڝIh;Q"[t, g^'sT02\6-gW4@)䦢 4K`zt-7V:yhD,-e`5޵K>J jaކCZ"kDW1**p rs+(]S1/UWp0OWo}}v=Z &G&NvIx?=v۷S(:sj7g#4v[IogUo59>{"4씪m?e&eF Bʫ4|ƍ\= FpZq7?޽j~ gdn Н|n; gc˞c$?u:LH/q^ow6iOzm="e<'VݚWT:_9Yu-UT6jnyl*  ʋ e^, qx;K|ƦsfK:stN Xv4biVъd-4I΢6\ H.ey&[讥oזy5h0!us"0ɤvG"gHE$*}GMd Rrh50#*\JPfڔ"z (C3ՙYJ{´[u9 e_Y=JgI(%ZDed*I +.ELµ8t_ BTň) o2( pXC *$w(*npfahLCs!GvڐYs]~%\0obAc*{Y"K he ŜfVh1)0P*\%VG)G%V,In!M=җM&F( 9(A=Xz'^ؠ_mD KJ٫f#Z*6M> Ţ .e-%ϑTm5$+˖z F@}ɨ5`̰*YJ;ą3C##3$}[3qMh*r`DHBWxAx6sTLLZMQCDKhxCF6d2n.smczb#XԲ̐B#Tv:&diF޶`$(#N#_z| EM`,OS@I92(hh@spq(C)LXJ@ '|**W73 ӐctpmV&ܪ"+ɺR6( $6+uԂ{I%2O&0"i+`HJYlc4< `;)Upٱ Ek*f F,#'<]< 0K " f ʄ@)3 CQA  R}"=&ë21GHlPcɂ31drij=O6B AvY.)bz@yr3.R11J DaHW CH0PYx&TDj*@=1!3{Ax`;q b"0A$C%c@IaDTڡJ@$`%. 
W22*J"AB- (SaN/ D] ̑+hR mm)J6􅲎W1£ Ռ˪RXkbKrFW JHOSʕ#qDO[RY))R QqPBqJY3e"шRk4,3IkxP>&&Qi\Ai7幤 ܊~I"e, ġJ@QAؤI(D) !pEYLwlnLh;{:Fy-8W|-yU&Phl,X ·O T^jPM0d,p+ NlXJ Qdt3XQ|E J )z) >@E&rZȼ, P>h.g"%8Lt@I(^x4癃d5qnny(G AA@8CEUWd!T?># y0,Y*N$m@]N7e2}y.0{==93YiʣV'mħ)q]SRlުF7ӼFijKJ)JϞ˸|K, zLhԞM\5o`فe> uP1:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXB 06 u(q#mPvԗ^9P,yBσa uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBPgi: MB=iPpm5B(BPпMc <-P:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a unUs;]-Թ`܇m>Ph=A:7DK%1oϲPPakYB{`6.Թ:jP;Nۄ:wQ5,a uXBP:,a uXBP:,a uXBP:,a uXBP:,a uXBP:/bww92rvh7Kx` ӲmM"~ktE@K+J`]c]jztCo]`Oo ]@o ]k.=]J1ҕi"5ZDW_k pmm ] (%;^~ʲ,^ӢtE_o봅,;]whw,JW'Db9t4 dkՠK;˟\hӲq0Ǘ/hvN$܃ 789 ټ\~P|A|P\#cS=B?.R~ 1܄<ߚߏZO\' >5yZ7`.Fk>e 'O8@>%3l!}E'3qYo ɦ3-``L ~Ve:m lYf:(,qt_B5:l -,ȤInӾذy"9Pz2wpNl-٦~ nˏqEE7|)hZs0|-A).8ݗ}ئKY>$ !׀Zq;?2:d~Ec>i ILɌ e#T<:.rGxOÁƣnp`dY۫EǑd]WKQ-Mߠ )vl_dގgcb{CVdtB)q #䴺I8+M32H4uЩ9r)/B25q#ɴVr9T;BWD3F0S<eecpPv^/ݕtq@W6c\jaX?voPWoomQTQ ЅH>VI%Fδ>UBIzW~8)ga,qqs =v}€WW1}Uc%uƟWAQ}!'>A3Ba;$gXr=zNlht.dڇta}NsNN)e"! ly~jkHxϕUy7eJz$DzXX Z)X(ztBrx@o?wj{[*訶/qGy]U-U;!h~2-]du{mpxٹMpx=4m޹wn~vʇa3ͶN@i. LWBW\7m|]x/6^:_ V_~S,smnA0hpãމ_k̼=nD @Y@=WkxƇpEأ=x?j^`,Wbt:tb!d5RZڽL/\*KR@;($Y8̼f־2--Ad?T[>t>|B sW]fBb3]7!MjҲy懎{FLϝ:iJOa6{jRr{Z~{I;EXru._`+Ga/?{D%84k]d峯J|e )@z>VƷ.ZIW)~f鏲77wu{vIM#\pjD 0`1? ' ޟP[+z7h˝+q:]jgF3q j.=\N"bza^Sѓ5ŝ&8/LQkSVʵY/bR eP4~꫗N&" Mty&z6z٩Ib{2#!m훁³L2]'4ѧ#Ë ǖIlihalF/C,-}f6sks;CK_nh$! 
+[ݷkvݭ3F`jaLk~3JNדV4e$r(0Gs] ׸Pl?R&"FdV ::aQ|cZAɫ^AiO JeDW aUښaz:5wA)պP&*ppn>¶/y۷i:XU' '׵ԲRQ/ʼnaI(b]SoO]( /.%,y/2E&wP37[s)H{3N[<ȴ$& <=7<#6S?<Q츶$FXAE;4^.FpE=#0M=lg[vY#DN.GЌ4 Ğ>0o & xƻ;/5cڻ5BR_+W7o.g )KͅkU ׿$/_mR]T-$LTf$?Vީ.ݫXw僷wwlCgO3C?M5Tߏ풡Dmݬ<%xwcF_B -wgcKoni inf=чѸ\`('Qtlq*A[dSM*Yi:ks9Ǔ֯KK|˨ Rz_iInRݗ#cwmyL{R~T x Fc4 +g2FUŦ5J-*@R^?\@O~۫򟟯 uo¿̂ꢡ u'uO@m~}^CެrsKzJ@E/^>4{!+ ʫ2w?&\l d-M%&~§AaG{a8ޔsL(QmGs+:2,"1rbrX+B"; V0³:YL<7!+U-T#tT!r3\I}evtZg0)l07pM/aN(E f-@~Ze:sͳ9X=M2+ĝuAfvqܶn1ul;KLԪgS蛳 !:n3}Ȝn3cB,"aV|tZZv@HU&zGڽ[ieݿW#9NN]5j &gve`yDh3)[TDhh q}oZܛ0!.Ub;Aao"`سBb QHll(CՍ@e#z}'0,9QakJu;ؖ#Z@v6MJ}C s7ʴnt^yӘGPAc,6ʌ :Ř2:ZPrD`L|z@[ttYFT^,69[<;[I^ En1Y -e!ÛC*oFA&>r,H -'oA>z6yyӫs^Iz\W\xZhcEPa25{9go@Vk;%s9/9zG6*Rg]E1inmOwo%Wa|e3EaZL/(rDxƂpK4JKq4Gwy{[/`ä${ޒ$,h>=HgY|Z1ev5qӌwrۺf>xF_ AHRVa S띱Vc]{-#chnDtf:GɧsJ79SЮx0$5Iys ̂HKB9UaaR"庐m,]F2M172@T{n!Q[UvԔEӹ,6x\\;]R;GJFHq;88C۔Nԫ ,3C]-ѵQz]r%lR=,mŻLN]}y Ɵ,'E#&jr M' 9cc/cZad@Pg($^2N Y$53Gc]`>vH5E`u"4 rکUuJ{(9m>W$.B9hnUJz`r nr*0?1e5;| ,s)+8.IcF1 )`)$$8"s&9>_:>߂u_u(]2L>u qO˥;# W-㺸u|0p@.,'bePIb7tl$*8ʝ7\ fȘ8P=0#O1Tk%i-vJ ̔%9T?/5yTHöLZmfbXCrɷ?; \Z(g,㵬S$3Q%IEy׷NUCtGesؚe3u.x.Ց!.[wdZTe|;5&i%&;ǁv\7XVtK6ԉ.If_e_-eܾN (gƢppҁme$a95a[.'^i=v8hnr-賜]7sD1܅H|G16QO e(E0B3OXDHs͍#cPMcl6ܥs;CHX&Ij{fri}! %%c88/ LD+T j4!hWZ,3BR3;-RzJ15i15/aGy2u Sv3_y04㻉k0&GfS~dM0d)MqQ F&DT#Q+T(EdF&V*l*6.$#3nq>sg]29Ա3F F,@2"7- H`4:,:řAPI(aK%"xMQ\1 A*QF6vκg%أǖ-Km $̢Q] `DY*" < L˱! 
L@tJϟ Bb>c@)- 3M1aZJZO~querA֓i'P [%5Rs99 i ʄJp"Jaf3Dp :Z7 I%PL&?‚f45"=P0K#1!p,"F)vQHMHh n02ں@&H_g"rjkCZ\\6FfGӜ(=A*PW@p4ܫ  cECᣠ3 y,:H z7M3椴P(,\_@H$CB "D ;xɕQr!X}}W̆Fl<({ yjr.*lQϻ/7ʉ*54D$  nT2+)t0[CB!R\q3|eg~1Mk17@S%L*&JMԺo #vk8Eõx1hqkK`9Zʩ0>ʨ\amozR|^!:A^wIxx7Ǘկ⏡Wt*lU.`^x*y8*?{Oܶ_AuI_ҊIսNc{l'$ITEɎIQ$K$eI;WbX,]A3Lشj*FF^g蒁٤Njl_]4dm@6D1$V:G㹶㘒,!gDO6m&wd^J}' J" ;;hn<39>d`,2x-Yҟ:=s^>a ə?б~oFh퀳CpkWcPS(ē̓EGmB՜(/}ty- 8O~+j6,9O3&2LVdyTV52͹Ⱦww3}fF\$UV5F>!E7jC SAaͺXߵՕJ,*-z,v @XÆ_,ɢ"EgYG-HOMlHWgI1[6=0\C G%ǓTt%7@rdz.9hhb{ I+HgE:v#ژvz57s}K< {Q5ͫBMdԔUxSuMY.4 qҐȜ vQComI&jni*S(9~nEVԽ/|O#a +gA9l%7JaiC!{B 3M ri&L9A'eVY"]'(0 P;aAq@%E=ZN$?IQ {z]?83S] e,M;IVa,rʋ׻L#9gҁJ'/f4@O ',J{;Gn*H[X纩?k^Y{aǝlIl='v#c 6 x"Xt]ԵO9 $iW2%qq"w§sLl-l`KN2{zM}wҕ+WW\ʕpa$먚d*"u6uxTu A˞نQXjm&\SlCBd0Xwҍc܁RHNdA-^m s0Z?F=ZM~xg<~| \򏟲{ >̵8S3b7Ë'`Ab|?J؎1(m܃Vvh`kl1m dNƯXlϋlZӊ!VlB:f 4}u32~.N +bpu@~%[/D_& @=ڻl{wx8>}{+.b/u13nz=X QOϟIŀ|Gj k|Ȇb ݛ$h 1:{k6}T06KTPVB~3`?=&K26Hw`f6~R1@Awrwj0 ODpć!H! k(fcEE:hN7}C?@=~S'!\GSR w ㊚8j= uz+lCĮeJQ$( vnCĻȸG˽Ƨ㫣eCƻˣ=PZwzh;q.Z:G 2M&)_v\xo:}uc'<7]~Cac`0\?>2j"7jZWU۷ hw*]vNV;d@Sno@@vHbnزb=Y4uIEV!v.iK`>Bq͈6[ӣ~>=ʱj˾j$QǾf :=ށ?aUF-q[x |'P3+*hIOWIh=JqqCwfk~|unL%xhrv $l ؁ώE3yO;y_)Η_&VrD4ugjLrEZ3d]ݡ>J[mylT%:٥l)ff9^@][T13vVjoǐLT{'*bLaU_K!Sdcl[Qn}nO@N&@w=oԞ /m iͤ Z?KkWƓӱ lҤ믆_יڂVK֔v@Fj݆f.Wi_.  
H@g* XQH+-ݡ(f^#SBpHa-j-,д/Wal0]2c"m݅&N3pCwE1cr7 % ؿF[b+:x+)l׏g7dCV) ^H~Ro|ݵe}fƮ'گ/[/=zzy7P7["||#/ӳÇZYvTAg2@!:&.EAMj-ע7N[t-Gwq؝0=L$5rV!όslvwьE:zBR'Ã(6 of>Hn{)9*SÆA $"_ua7 :]籂ݱ 49vk߳~avyxXJ^ދ?%{:0=2U  b4KOf3Z9]9d@]7wd"1*{.cqj*hdE{^GS&BKiNNS DW(,S Y6tYlCy=JG/: kf}fyB t um̯:=v=C):`"[DEiez֑՛\wZ)1~PXOg'.^2&@Z+m\jF#%Pӂ*No]˅6mE m6 q0j)61 3 O J滗ojj,׫&:^Z=Ǘ\=ԬaIQn7 MҦx!t&m~@_i($aj#]Eޘ4B,5m+~@qeRQt Jƈ+D*"[m* G\9rqڪo.8akH\-"Jz+[qUt%jA ֨1 UMW ŕ,Kn3(8m$:VœȔȫ*dgA 5_ܶzYvtrɩo oʦe_dUDQ-a%&fLNWLNON+A+IN+JN[bvt Ӫӂ%fU$fBٗ"y51;:1;-Vgbvt)i61616օxa[t*kcn촂촪촂@^ɥh%fUlϒ}Iԝ/1 hU&1qiW“I̾\~]g.+1pbi&f_/V>Wbve%$fU&f_ ONW}mXiUiugHN.51;01;]JbvZmbvZmbvtYr%$f'fKJNٷٷٷٷ'yۓIfnoҠȚI7a]ޜ .56bkh~ >֏xKQhwVmUj0C%8"Vj⺍(+u/xcܛ//_jv [zPV}E 6 AQDUp|MQCu*X9!M:ud<+}f#qU4Ƽ$1AUa0~3i+=bD3,մ,"D2Kosuf/q|gY|6NJ:%'c9NN]_IOr0IL1@vYy>[.%ǶaH+*g鋒-uE_c&\ 3UG=_Ð9>^!pަ;EzDumf^a39q8Zp-rc~en@ūlኃ\@d/voaqr|hkGw2 ㌖osBe1Wj(=a8RQkރ;'mVj&ɠY#➀iaª1+ /3dP@ f`|ϸQ}'cڅ{x1г,C7EuVP A¿֗*1?R+rǍxib$rY~DJl};7Q ZwR,[.w2|Z{ E΁[Pvxg }yrv%1F@vhU$T~ƯI<~1EWCu@SsouْK*h B *V+tՑeꆏkj ڢkRlH!j*mJ>uLO|A J@=C1EISi: XSi qZ~\XCg4}Mf9*mh;|eU+Vx|YQc'4lyl~M M59VIpU&(\8S%y88GbW bcjί mL=KPI746 AYexn7AIGmWl?*ďh' ()B#U9'["m!بk1VNI:m>Й/bv+m#ISRG{އq7"9,j5}#IUTH) زXɼ*ȸ2bh]mC{-;6}#-ϧ?j𲌛SRtgls[rAKuInӢ`ˢ:ֳqy<df`f`074\02(w{~0uWN:JwMAGjb 4LItFE9qa1q xtmmn m Clq`򶜇+lRӦ-<-S^x ^28)Ë>n hwMOɍ)Mᇳ-ԴM{V;:6~?6=&UV]qdn 8.L;J+uDI+}~ԍUIQe8=Nz\(URe qrZtwSw9-|/x="l%|UԳWp^glB,r\WyJx]?R2a]ԳO }sbJLc\hcAB9<ǒ?&<|Uafv vB ;BڃBJ=.`>/vzŲ\{r2(:o;cƌL@jZFL&Z i-u:ݿyce%o- CRb',@Q|0arroDAǝXVTt2AsnvGѼ2ḳ˝~J-aT"{e)sh-JCR>RL)8oW12#XP1g&{k92^:)C ~Xqn5ٟ7nPhrgPhd0$OP fh/#HHLBDN>x@>e_"88f aH͎ `9AQNY {@Ҙ>vϚ110FHHjbKFb)DXҠq0I̥x}`>vH4QujŢvjU'a]=j~my<{ŏ92b8^V\VkfIQyezvEj ˵/ʩ 78cv?r~tÌ8p#ӬzMutV m1^F $DH`QIb$lHhS}1 R) ˱:z<woFhRC}Od/ԫ!cB8`AXAXro XH9H ÌМ8>'fͯ cʦm+_)N]cܖ:M~8TW\.յ^VE, T9yBa)O{YFr7XrOr|ZixZ(C(2ȗ5R4l96!;"!N..U݌r@&!P%4 "9^}HZ鄒1 &*e5BIB -gNI1m:{zLNBڇΗaw D3ew|F+_Su$2DwīD3D%gu^weG8m52Q& XɥBa-"#4V*lj6.T$#7nt8;gRsG u,yq1`d8h8txHQ̠h$0r @& NnpDRQ B=㬷쉳 WأU-R?0IxGivK|ce JR LT x$$EcC ^!?F~a Sj 
E6;rBV R /r^cG{ u%P|I~¼q 힦7qaG*t!'pTxKkLy"c BCeہַLљIUQ}uU1.Op`Q{ʎ1r/ZLcmaCυ`Ͳ&ֱZʄ2a^!Xqc%bDfU _ez.ݧLڬdA43@ )(uLYk)G;gO%P.:mq, 뻫b0 ]w^$B1"8~_ t`d_ԿMWiy[9֔ )v $٣ ßc+.a 9Vg^M.O52S:MrƫEp3)杹6Eu/.'cK|l $Ԋ_ƣChCK-1M͐0ч_(4i'@6Ѷlcmo~MnzV!ɰjI sX>C"W̒%wp֟XݔM+u3Sݖҏ/O/ St)ts]'\Ld6| ƨ?閙bvMRhˣ XM[L>\wp Br_翿瘨W > ><w/7-qSMC{uTָɧr| c]MGk_b? )aVbzeй[92*7Γ"\ Rh1F5ȣ7(H&|NNt\d('=V.y%=_pu1h!=6r<7h FGDFYM4RuU_>ޝg_sSXՆ|"r+y%>p$tॉ  p'1Rt3&6;}~ȦlFTcԚ2ç1lAHB2W3}WYfd~rF?| uN*S?d wm ZiJX2@Y$v ~as,]7@ԝ3k}ؒԧә)OmSIztBWoK>%9?t 7)-MۛFLHP&qwFPvRH^Ebx% 5BKd6:B*6vx$E;G)# kaQL`Y&R;U'H :qɛLIT )K+3{,;Ns;`Yx 4I)Id@{KMtkNz Q#aK@־; נpu$>#Qbʪ(`nl7X1pk"cgmI 9Zr;1%e-L`IV)VDm;nlCz x^ wE%K~uM^%rCl]䉘WpqW ݆O gFp]oAP-ul0ȕ~27WE`PL}_k87xK&OTk6I8L=ѥlJ[R= 冐:iw0J `_Y (O9-r͙A6ƒE@mnsOO#ק_l:jBrpb *A]{B J)y1Y@(F5j6RpU Q[e#m}𼄕9u2m0ޗ.5}Y`o-a>%@&Yd/prQFIb[m,SL3~tSŮS\k8g}8N&+xs}]3UaǮ-XGdgӓMFioO..5w1#67񓺩``ʜHǛ17mޭX'Rߕ?`yW~.)Қ@V0!":x1&t&ފdE+  9^:^82#^Gb">|[뛜W)c? g9}'}^+HOo]N%+ vRjB^O7> ݕUQ̸v~omVMߍ|e=nzt7\cbD26hl>;5>J(Z6J>Ծf=jWm #}Ѹvֆ\5SX!^pe;+hTg!=(11}CqX.R_㯧Ez8ٞV0{xz('X h^xET+anS9#&.G3YpVzPv^ԧ_äv y֨BI{f虀HX >ƐB:13tVk[פNȴ%Uyن$][LNz iOWՆsfڦm/K-$y󺚜j^9Ѽϧmu)h9%;T~8D{#r䡪V>~jmA =$dș[q!fO`AfTFHYə%zG#ʳ^DNQ]&%U[3Va͸\F֒uᲈJ=]h{]xul]lwN=2_4hh42߹΅{hQ)%gu0D/!” QDS24E{MP:PeSFcfԉ!e]~pkl? 
+Zty] =T8x%Ƒ I%0A#A/X {7ij9ߴKX34!$Yi^V |`RIq Ʉ@f@ Ns %j ؾ#G9vP_[hUAMlt;|1H~8fV VBc-fŞ"#zBjr% FhIhL!Kg !u91PF;Qw.4*nNl'##9+ P}t;|kH.6Tԗ;G)VvЦv9)iaM֫#z:dMA NIg/ ȀTZhAX:RM&^Lr;Urwz?.]2,Vwkmm=H1mh fh'bn3dv!Ol/q9?)z[YǞ;V: \81T \ɥHA{N1If-0 e G90^2YZFAomTDC6*Z鹎I8o|pZ9"UԡrpF3Mi|U]Bv&._+?XUO|֫lud#Rtpqe~UR_,h͊U"BN s6lҖ]7cw hEXHZ;["2HLf##zf}J:(5"O!*'I֖L oBhy%zt!vj9L.qei;(\04xKij&Ӱ'n-+t?9&xr{GeQ\wtԙ']$l0tt<* Ve$efl_'i]Hk[ {J=q8FmeF$Fruf"1[FF& @\Uřcف e9q%fo2P!, 2MJڨvYj9g-%;)m=Zr,RC9[N -3  %jH2 KLIJx!P$TG ?7$;nS$G 6HPy`-D&Ƚ"Cb SjT+GIkv;GtJpZKZ9H-K (Bf++J R8(NULQБ(LlL)[ EҁJؐ-e҄&hrcBDӣGv?MMɶN3`HwI(:d^Lם~V []j?_,4坿'_Wy˴ǫXz~:-=y゚g?)F'I@P_n:L2,"VG̹y?+D8&$.HnPx0QڡdL P>cV*tXGq#Pvj i&$+({pn77zڸFim'1t22K?Bʗt]7ƧcEsWM ?L]gLoFgd|j-޵qs4D^nrg\34V1ҝcQ㌣t FtY#t'Y꬗6OK}#%<7aP6kQ$MBx]f2ZY٩^HٕTq}2M*s/@FgK3ʡ+{,E^m8G^BfoVo< ٸb+N,xI/[ԏ|ih*>&0$竗ᫀm%$Z_U$Ņ˟"4X׷\>U`cr0xB J{28c4xB ΂;B/JJ&nu2{p(n= syVUjF>ﻟz û&*A$!=?=]~I]Y)2Zyl7dؠdy !rΣwcJ"'+cԞk5"%D-O1*HmIY2a,)#AJrRkRBi3d:ETLBZMGmyElXk8Gn+wMo\x,zt(]$'#gLČ$ t‡.,xN&F?|8˃bz Y MJ'*W#T B$4PPaEٍ\"hl$ @؉@6^ͧѨ[`QO.8]Z飐%:Q(E pH֑4o0fnS*FϽ_Dgd~c+3#hA3+Ay&RQe1[4X!د#tQK;"2"r[ȣ A 2 5XҺ3cKaQ!呇9{uz/ &~M~>a+(SZ } up?.~ IKHr&$ѻ4*`ݹ~f/=w5L~~6<_dEOM6nV|WWܐ1P|LpƁ;vupftGW9)۫4Q,ѣ! %zX*/u ʗ a2?q%:5L\Lqv^J;*,~X; K ZhvjEoGRq]yoǒ*J7R߇8c;xd O05d8eg}S"eH2tU*=[(u*>}CO~ȖT&͓Y~f7޴8G%hE6ڴV!+W(Li$ 2M +fɫpq6uJ>8U~~?J~u ɝ1%KHSO Fד ,2sukB*1Yb<\oxDr>zW/^^~|3x/@\7LчOM`swo-q[Cxӡb; q7|60/j"8](۩Gs0%FiMKOb=@;ד"\ "bLkP")%DA a"gggEb68L &N<6-gFb4H())󝡘YM4RuZTF7hΓϾYeXՅ% !,8L̒^4|R0((e96Ǝ uỹO;g =J=]x%,s `o8#%e)!p.95fTC>a䐤XHML>}J!lE N&x[}M  M9UR/"҂%$xiC5Im5=>NZf:KW/)<|1$egW)ϟY^L4CYf)CVq<Èr9gAVCf8]ʔOBg?uc.;,,;0,| .A.I,"?q1i{mgSuu6tWH{ hϷ,S6H rDUeges(38zW~N Ee2}Oeۿ`&y pIUg""M?h?3ބQYMnf̷a _[D1 $,\ZH bTN:$&!+$_뜬MB "ˉVj"vB*)RZcmx-90b3Lyb7ې",0Pv{l |Cte߆M|wU7ɻT nbZyHw95llZKx0mWVJofõفZgֈI*]Dipj\4c҇-(?Tç/ǻp0U=wC>: gi=\Tb'`jmc e & ¼&BP5; UTsc%bDtxejnVj&`v)kT0<Ē2ͧp-sӇ} ~Ղ%7ih 5(!t<{+?UJR )!9@t y1YqVšV Y鞉qy 's%>aXzr~ܧ@`}}$H! 
00οFMj:ćXH3;˜зeA}Di]].E)]1e;˘(l;u%0'7 ۚV>͌׬m\1T1#2gس\tK"Q['O"[z꒕.4"̼drΕ2Q"N:x@uz7?;EwG$.ZlvK^n=f|5iSU\pD2DMGp&9KN!xd=G2x#6i#(cF1ZleFJ5E刊`)  Utȴ]Ӯl8`0MgϞ,}u77+)*r^iG,9T=f"sH*#cc*}TEjr R-+S#UFF]y2򞵤Tx|PbL"*Pz$lHhS}1 > N<Xo`[ý.|mzrh/4z+z{&OG9*!f6i#NH;'\%š0CJb`+XoĒK\}L]+]弻??ZR&)]GR4>f|+\ u96nRix`,(P t/8SRE{/*]Yˋ ɶKobɷ IDoƽ~D$z;JSx;2{)1©0ل36+(0JL>ʇ^+n8u~?-cH 1z~żA拁Y|LcNkaXիjR}D$Y7¿jYOmz*9l!Mu|TAMg^49nOoFE9g0@lFg]-^sq:ͦ*Zg0uǛ`y™C:ܦ,4XLjA=#T!Waԝ6Nt'v6{too"%RXlR (Rva!固v`= h,)=3*8ʝ7\ 3i8P=h:F̓[1Tk%iݳv=qfkMLz f0֗FKO[mF$|g :5p 47D%s,˭2 m*/qwF ?eb|T"WDY~:A6M26m5?JO`}3/p~n0ɝ5mnޯK%tR",n:9I9XD݀ld@;ͷ [\sW;\.\,!nzH 3y=S~e3]V5fG#ͺ82auDD17 u4q,/ #jC$QmRm>u'ْm%} ˝[1ww.-P~-׿Onn }Ǣ{Ìjq/c;4+-}.9J`c4ၧ^n{E{ T~g[:6~7{LQ)䘨"Ǿa8i*-46IE_Q]E5gD<#ޖ '-F<PD{TD cʶ=4J ż$0Y@#Rֵh(qIUx<\mAv6j([K^ ZM'ϫN:;B^w Wp|\UɯI̷W:Eԧ/{ծ0%r1m1'R!.+.-WZHoph Zo`5[|cpӽ Pgzʑ_ R-`yX`f_U$mMk)I߷xt|9,Sm%ꫯbV]O='w춦(/phʻK@K`d1W%[Rd>R*/<9S&MzW_E͐cOEK#|Eoا RɗdyQJ0HFfٍYidZ40ba֑ /51j~Ox=1nF˜OOA]MfKdb-1(E40%ʔ(_˲ڛɢm5uŤFTZU[!( 'P.o͆z6sAfq(j˶oDH|Yu]#CݗqN#{m WxF%m&RS<;&NH`IgX d82FS1̱ة..rl8wW ǡ#qD`:`52dTZ` 2t:{"vr8j[lQFM;{M{vqbq+ob۸b@9aTdjT1Z-˵FAbH JP9Vr|*UbPd&/b 'sVGYM!m]LZ̤/0{ B5ԡh$^i6=-ΩBnQzۗIՁNc=y<5}雁>~%Uftgo/uG{*B]bX8P"||Xg";RLs<}<pVLAVA tE(R&Zf"+#o2 u!iY%jAH,=Ԩ\'k|P-wb<_|[tЍpԳ^8'5}jMF 0HST\T*2ȯ"B.IWgpT'}3IG Su[k:d1P®T'{o꤇k)_3E2KLP sE&P7$$mX7ENVNzS߼3rЕ=AYyvx`۪6Mxq9ԗM+1HS#1#_mWaO hskzپLU_oO<Еzp~W!6@6!!RPNW66ZVfl5g.w?Lf=˅: 9ݔ~}}>&?jO~}M^_'Wf߇O Vv9/T:[I^%T9caEOs*"8-H *B`/~h Q˦0觧)OHP6{vOX<|"L$Ro0B]9F۳,{w<= w =ų@Z:?ȷUW0D+CKa Vc-_n綃 nE٠yix"CKdG Sgż`[. 
Pcyacf"'ȶ~Ljxղ,tZԈ8O#CcR.ɵ\ dk>Q&:9p8(p s(_ZL}RlUv@a,}\Nt8DH{h*QQ7Ȅ"} X)P]LD`2BȒ~5ǂ旣[Xق3g҆R4Z/,dnZL'U!xKWڬD=`j ۤ{9^)__jL0ݽek9m o ?RQ=XS}Nнsndt?ݙE?OA=UunFꮮ/.* >8ѫ٣7뗖 >Kf"ݬZ)W8BޏvTzu+C]%+@eEvyiZ-A3*ڽ~[ i~3m{(GU ,fԛhK)E$j!xdHieX&*Hz9j7 PA;֧u<./|}QX -vgiQM~W6nGܒQ3Q1GG Ы'VQ)^Y)/ :Hz0;ՔԳLjk<\Um :ltE ,HwX9-(6JّtGؗcs{G:`k :e#iYjUi`*Gug&kTl!&˂b<^Ż=ox촥Jƺ;סk9s+Z&}R (oE)ok?^?d:3z*+t((mI"`=$HA ȃN QkqڷM|49p L|yA{*Ud邝} Doӗ icY'j})ǡcB+xN՟S)h| Lii_̌Яq?3B|#{ު?_t.$G/C AjLR1 OK` :֡ƒ^hO>۟Qdq`P" )Q,]Q(،.u3Ws YʦxͯR$e@ރhz/Oj8U4_9]؄ۺok񛛻sqɡNsh4z2%ͫǷ}?er;d:ޡD{`yo CST]Z+xp.h- r,ٚ "%RA~Aə"4lֻ(j&}*Z*YZ}*Ŝ|)]N4 Tkdl6ȸ [IƁX  X `V]{(ngw3Z&{z xuuj2]*p&ShA)l $(S@Z S'ޤLmY 4T/&%4Ҫ¦-ljvEA`8r|.z6sAfq(j˶oDH|Yu]#CWά5k^d 3 (i5YG!_1qF{ %M:;ƚY'đ1߈Xq b:غhީ_}T`<Dl&"jjD AAY L9 )R XzEٻ6cWH]7q6/5J)R+JzS=I(òhr]SUեœt,ZeΘHqfy"=אLdI 4P ߶[#gF/,%_gk\^Ė"z׋VY+܃Pl83=hݍXv&>9˫O;PzQ!AR?m$ ZhIhLI~{J#I1r ܎1 (jhYUSk?wʟGG(sVh 1*wLLY׼#H S|SjQW,: ^d-zby֥:M@b@12(2 .CP!k-K#SM6KVS'Z;pOt߭D :5Jk&XR~JƔω;Kt#'DwrwhJhL:,+7F H "UX &Qm:ݝ2/R1Kl0l~ i '39&2'#ث`ϻ+-ޮƵ]jۋ/pp9?0ͽg|oUx0pt|3ǘ,UD X_]^ s Z}EHXc tWXۚ Z*=ߔ(6|{M]7&c8 6&FIp1`FE2>I8oKn {܉HpЫb /Y+EQ>UŋNU7giT]ue!S8ߦEn}>O?GFњG?.n-}p՝fo`%5}PkҊmtzvTT ǽA;d?~-@%0ejLjՊ3U2P)Pj8NQ (e={!8rA(2K-[E6bb92_q)I[k̠BSfGE#,'D%%p\bDv]-{[#gǟtei ӥ2(;PK5\ϛl5 cz]a|ML$*b5|H2о $FYNh!"3%'kM97|u Z269wPKg7ZӺ}Gi˒49EEVL,,Z5Ȯ61*m3 ;Nte`uњY=d7X/-Jl Y˂^pLn+d)9Yk2-mF2f(`{n1! 546Fl4t}GCw ->m?03=ύ:3!#0]CALl 1VD$Z F@Lj@T8ɍV=:{uM]J"Y$,JI;tx1@2Ӯ5z%w'̊.<+6y)m<i`Y$nt 1:KgT{XTl YEƔ |ih'Ұ@] !z$f7a^vO';o}~ wmPnmخ)q_f]ȝϞ91?q[g3iJTOI_-J=#c{2_ެVYג{ KFYà]+ >x9F+BN^( eD*z}n|Zpv8ߘƨ?h{@&f|_dmcgXbj㝧e swkeBq;ƭIΊQ{Ia qZJ JY2t' ! 5)v*@!.%.;I$մyBTף*tUoJ˔e"MR}g§rQɅ W0~׮'%ņ}Ed|yCuNLhk2S 80q|Od䊔qzQ,ѣ! 
%zX*/u ȗVk - |l|ӏ誐ϹX]Lr(4OKuE2[%Zk4ѫW'W'o܂TkmK^q*r>[Rqnv~⌶tJ5CufWWˋ{۸1]뫳o bRs~ppz6][k !Vt}xߞbqi[3fL(Wtn~kYf$1ƍ,XvheГ9烫nNY{xMuݽ*%ޤ8\FĒPUc^UAXhfXO׿ſ &n}w1i|}KV0(Vqh|yfCҮ*UE-Fӏ08#!9?/~,||߿{C{=Ji5$ <<)@WS۟Z󶦦S-ͻ-y%gmyz^+-Q!0n@`]vÏ% }!"N-HK󭷓p L|viE IHN1Qf9Z2̥ٻfI12D2z;ssC{բ9Y#z%/&,, Y$2RJOs\8˃y+,Sރ:u:ի5/UKtջw%gSwS?'u2?uy|dfkXѦP{-M̓` <JN)1d_odEQ:j\9?P-xѽ]WPrtT}~O |YOU(m\'k\7wN lû&};KeF{TC>.K!7l,$d{hZaL?UiRauYU~L Iq8|&q3]JOf`1QJ6%AxT^=a~o޵+ٿX,̴MEz,f3 I.ìƖ|%I-Ԓ,K"me"`2)fϩ"zcE] ù_KKah]}M,\ٓ Pc7= 5h=֜˜c7IIQZꜜk?2N+ erȊԝIMXM=19彪X,*P#in٭)$Xy_^^}}m5'Zc5:lfM՜tD>i5j i|Ku39:E.WieXK!Cҝc-kJH ,lEK:MZuTÌȭSEztB.7݅[| fUE&$PqwD i2maJhr,{xAs p)$@nKټ7veId%1njh4$~ lbyqeadY6/:b:$.o ,ALĶ r&e"GΪcrkoM R.iZD4S 0d^吸,( 02Eao/yYlo6\n#iȸ3GelWj>Mtp¶@wXJA[GǶA?ml'e Yjsj]i% Gc ( ()X>..诉sҹ4Jx./ _jEϚM՘O^TMꢹ],Wh<\{5>M8RE=2zA3iWK^l)vQ7{󳍂cBrnE1ibؑZs}TϻYok+#HZ51}Y@wge_ _9m`y'CZjnN-lT٨Ž:3A+I&Q'-c!IK+( \ִs7#):tjie~9pf\ֺ,_dypXf&FGcIˬP m|f "̌MeOpKng[/FhH^Ύ$Z HHh'˛@9)ydTpv\w*ge'hKӧčDOA8Vn˄2lRBOX:Άr{Qt{Wܤ$-ѕ*Dˌ&B` )JҼ;^;"Щ\WE~ /wܦER\ZabAj/|񘒾6rZd ԵZApyI{^Q:OKp4 %†ZHhBfk%\9Z7cv6'"@WCySȄƴ湙8IA9lY.Y- hҎ79^ںPJg^hg`w5EvM6 RQ.!G%v\̑Kii`L0DWp2f/apyWt@ߢjTQ)ROB+4@lj7$w?='-J8~;r0!qUG bɵ6V1%O0IN߮eu;Y]v BzDq5H0MR 5w33o2ۍNkHtjݔ.7( ř% GLj@2 QЬ> *p#Y{@5;d. (T0"3L%t:rTYmh'3ݞ ^Ē$$*,S@'!9#7iv֝ MCB}1|KstWBSt_Xal\AƮp;!Yĭ* @X։YPd1+K⭜DA8,4Db,;wU\>ؽ&LљeSUkTuGuPpT}]VWYLs(U $dل6ohW?.Qfםz7Wn%⸋nn#Ş-?vbԎcN.1т"f TU9L }J%wx&ʧϠ'SuvOaX%>LFi; 0\yΰ: kI6y9;+;â{;M1h88T F B!$qo;3,>!Ĵ?ZZnh7:YrdɈ2J"<0!tDvB Sчw3[,t :*A!KG]RkIMB @vlDTlj}q}º'Xh: VJ腇 8hKҼuN=hI>ݮk3#A ļ@& TY-O(,P2:-~X dExG@E)v&4Ye sRN 3W #2q] 7RpW{S1Ͱ]~w(JN~1mKtY >6ɣRNT'‡Rh#b{M3o652.)rS/`dN -qZ.|{,ѫ! 
%zYj0K˻k?=1Ts+FhRUjuPiцWVҸ#Lոx1џ΋OW ْu}Yxl휁Z{$c+ZܬlI-jV7#f'ĘFGQ,+Ѭyr)U`[mrUUk{ybXOM_&ՈW%j0/TY iOso4N9Ʒkl=%C)]?VIEc4SFUTL^Q5Կ߮B┄y_Rw|Oy{_߽y^.SpE^߫ с[M͛FU^۴0r-'~w.yE.kg8"FU䗛G8긖#D_ēne$#,ʐ@q"BD=\,%BU.i%ʹ]%iϓ*Lv'1Jsʜ%Ap6C"R8Y|[L{/ޯӉ_gt6'"Y 5z&oy;Va,;#K6#,XKBQ`HH֖Ns,jdyUy7:銖E*BU)՟UݟhѨ_\ࢽUzVq&N~sUT.*^Acjƃ:~R::8iO)T-ݽjTUO>q4.{.몊Ym"=hZў ,K2 BBI4MnU}:*Y ˆpVDQ1AP>M-7({]x pV ~~HϘΪF+q+RΥ__G:\: \* C }GlbʞT (X8= 5h=֜˜c7IIQZꜜk?2N+ K~`CVLj`vNn"%Iy.UjfQHsn-MN!(B4WoQ=Ѫe3kt漠 &ISVSHc]/ryMN4-*XzkI\TBZg0ea+R_Ӫ{͏ftt7hDn,֣v1A. 2u,nvSX4[4m/2!J 2ON;-i˕-U*Fc KHA r\i-~H $.I9fD 50eU3дg.,k{;ޖE'QLĜB7م"7%t!YĿ,2UHYu{^?{WVdJ`ndYy`yX`v,Ev[N`e[LYsX{y))Vv5&1Z("VBPTB bCVl)WX* Rml\K`kpI' 0HS .ʹ^)$MeOV*naFDlC,dS)f@Pydq`/ı bْSj3gX=HFjc$raǃ٭SB?f)BQ: y)ϐzMr11lUQ{1 , v}mHHɫo<T>reBڵR?yo>m9>\W}WLLO/+6[ߗnn>^#_if夯H-]PwL-YF)]JGJՓR'R*vUM~);|?G@JكWX" Ln9`JyR/RwJ"{r^巹vGjlUI,,"d09UJ$f[ULf236l5~ 37W]zl ,_GvQisBQuBU,Ip!qBf642t"M\c昣1GsS6p'c29c)wۑiiQ;oޝi}|,V5=xQ+NG+d 3I;1Mt8ҭ1y 1N{@1N{1NNܥDrj]VZ}l%lkg$ȭg4b}%VGuULDS}Vy/[wx)+N<~iн}pw:+Π>uE2xYjb$ܐl`˵8ȹ&ylU"T#)o,>Cҳ"wV|L%Z1RN:Tod֝ȸ {͌-Tg, # ozԞ[ۧ\bp|onj!`ᙜ.9b= (E_LY[)"gBzEЫ;;NPPu_=`V^&u ZŋG`mgk^Q5<0gWInOB(0CE3E3xvBK+-bjS](`wf<;׺B ͏mtFD3"∈kȶB51 >hc`8Nf=*l=8]lT9]1x#߸X# gRN챈iœ6HI#yΈح;O/luH{fɶqF\q!k{xӡELk~.9n_zݯ6gy}{JC ЎV2Crm5&dCl̞}̰:oG)pwKh ܖaMѽ}ھ/1;jQsjEv pjuH 8)GUQT@XmȡSyS]Ag;]MVI~g7̱1٩ϫk;I+>}~(RaToOTpC 7~Ƈ;pt s??I;{?z`fB )U+ic)ZP@C   [ߔYd+-v cS "Ԯǿ=me7h| P_yC/HRhEso Z/2t9vɣfGz}:7fo~1|r_ys_+8lh3 mԤ"rZwU&&2NӔӇk#jO']8D= DdW&StZU|?  
S0`Nz܉$/Mn)sm|ƢUv띳/'s ~d[ Ҧo g; Y:/1D92fFA 2*7"N4a=CMoG}UFAf}bGQRwסhAtMXqHs+B2-6F#v4%mZV#rҲݺ!/=n!@~~ 'J˭m!-_ɧkj}L׊wNvK=@K씿wD&&$P,&Kޥ\Rh# [D%\: aAUF t<~aA^IEurRy6=doD\3isl|̓i{΄8MOK#u8VE(: ^9?`0$0qpZWA7Rf+ݽ$GjlU d,db5TX)`rqgikMŌ}`y)uy~rηl}__'̗6,:jS(6I`6b׾&OHZX/P))Gb yHwO3?~i0ج4xޓ\oGlOޙU׾٧"8s:AY/6&`&q!8 mF'ZnDlBQk'%u lrˠZ|ړCb# pzoZD@AЀ)V@"%"S* ?,eAI|MQhqtH#ϓ!Xi="bET#%!xcc;AD2Ҳs3H&%OvaIfW"kPFbj9F+D[)bg9A5F>r%@R1PyZ~k0-93T^"[%oU;] T۬t~tWNuu9F{ FEe)E3rMYFqj4կ>>U靪LK-)-Sؗڙvݪ{ݔtL++i&^>;-vnWm{L,c05%a𨢚+&ڔ:vF6jN̯ߠ+S(5tRS[^mIB*պ D#3ɵ`n1 R3}¶91h^W'\tȑPnEB"Z=3w֨ ^Z,ؑ䀒8(q((; H5U:Q0!S&aJ;f2Rʃp-AY@8z=&Ścμ˭2D|182PjF"SY 05f<52(vu@ w>k64p\yHV hhfi%1@070d)Ag((F"޹bz,Õ|`W0YS 袋N[lB iatj| lHߝU6,ݢPTyT03ẲP_/TdKfUxr5:^(ǚr!:ŵV`ȦUK\`K? +.@ ֚fy2׋ldRL|q?꛽]0U+~N* S7wW 5#i֎ꑮ CS_9eC1x`$ɇK͘d8[;*AGmnuk,^ 롡40إ%%K1lX[WuJM9ǩ4e/] d'l}K@W'Q.q3|%=*"ݬU5J KpS/Ǚ.H>ٛ?}>=͇3L}} \&ꭙO¯O'O~CK0^34Ul5nsճ \QK^3?Dw+C<nxP[''u:m4 =4 o\OR+)0r+ ))wI.K)N&* 04ZÝDkQ'Lẅ́lFc/)/<|B Y ٿh뻲S G͇|x0\?3?A ߋpTr~![軣ˎ}frt6e;˂* Щ:>D@;~YMaXpn{=i:':3$_Oxau6 i}u:xJrB?"~B nSX6[6mۥN-L|' ˥*> K$!k.ltLe8]Pv@&0飏R*J&/A\{ RD%3kAVFHf( "x[\#2\*밊%o"2%QR>B6x#_2tgނkt&9fp-m$7^r# $g\Kr8ľtu`Trgj)1٪(yg[v+`k Ob+t@HmojRz\Tknra*W9LNoӁ `Ϯ/BM]}–zYv۰.uG_oe1.W>ސ$M%׃˧B-) FoeS(J;H'H1%dgFIlenK' ~d"'Jќdsk,Q4ɞ -z)q$ڪ\JP|䞳@Rʩ@yGqjqT[B>ҭvIp3GN%٥FzՃX!B>>aL'AEaV:S$z&Աvu$аkr8]ǩ?GyEǸ8& [o4$5qO[ {_: z. sV(,'Cge4fy6ˆ*9vi}Qq^UqV''.fcfiV9e͹N`wY9v٣~?eoQ|]Ou4~&5yL_OF`RGಠ3y*0dw1%w'ۓ]koyr8?eS<`s-8 '1bePO>Bpͷl$نuFeZyzN:Խ>׮sI-Wܟmgٓ1F"Y *xZ|VҶb;Aa,4p ʡQ[X: ZIɣҾNz꒕.FyC#ƁɈ+eD\s()ӎGM]Utڨ9rA G$.ZlͶ_Ck:nUË"\+% 7V~"is#SBPG%X<#b<ymIC41E 62N9 PrD`L? i;iH[ӹW@p׽1l~id\ST,Y*;?ydsG`({݋M,9XMLryå O=cF@FXp4Ίߎ{$Wc2wv\Kh4Cq̶miޞZhx^JΈgy{G8ˉ`,g̲:/sL"&728yщpg"I[췣o#wj9 F% 8p X䀙 46p")S;:g.%x:$KzA߮FMf\:Wnוzd%W6}./=)?_dDQc8!j(UFk勉\siWHRqptg׸c ccVl =%DFRYK'd Bqu<k1`"{&CD ѩ`qR"!@wK9 H 1נ[!χ!9^YO)O}y Wԧ aH: bD29i Ǔ ߡrb8>2  ю[C:~XD_mhfmԃ@Qۚ]?Ahl /] gRc!Nmth%. A dV3㔆9ci,!ƗdBΔ4r-]X+?zJ6/eynNp& c6h&0Z;4+{?]mo+2"YElnddsp૭X4RʯOgF̋G5(ؖ<"UŪb=UOKy-n|bxe5 k;{_@2ߓvň5 p6gCT_mhip,&lik #W//+Jk.P 5jm` `_?. 
A*goFBQ0gG %錐Y&vg,sF/@<=xz|fOb %DΒNҞ]ag _~564хf9}55x)iWEr%A@T:eb|SQknz5yk+k"A9-!CpR,kCrX$$} ށw&X ~vu`#Ké)`<Z6.l`C0Q D̪| ),P" bs@֑ZcoiY(S^E,76^fl|Vl>/V+xu(Nm3>שu=={>Jgo,xkz74:ܳ%k ޿jВ$X !Xt61?HMl-aIQ<FZD),)ZC$'}O2Ƀ$?-WQ%i?Z.Wx֔E,jʒG"[m2*Krʵ3*̫:0e/mðzOjK MnJ NlB %u-VZ41XUYH-, "m}2/W /׌'>߽gL<ҽ|`y|A?XW3uVwwN |NCAG,`̻&k].-2e Ru,c:ȮwFFN>>qk,@B4* $2 S R9@Nj ̐{ 2 ۰BFKVPu,d3Ab ١jtH%VMkj*_l U$ص h}gɯ_2&~>պ -hح֑:Gcj:bKUZSxuH0y ]pIE'{Aԡśśu77'kWO.X[YG6d,x YB& \2`0ͫ0ZBZ昬7V,eڱ N)T`t޲;olیе1R|Znx9.LzSyWyi.:;qy1S=~)W%Fx=NO> YAeHDdئ/CQTX0T|ڪ,oOL@>З-1Y3얡CJrCFfgCᣯ֙hvdXV^~w6ޭռrzڮiy|s#·F^n -nox~׷s1ϧgӕOik\>]0]:IWܸ~ɢs3No>l;f{j|7FMtERjA'ZҔ)Н'ra7g+Nj{v3 +A7R!] 㻜T콷-Y|M I袤lAIՃzmR׾,;[VxAWv?[y;Y+7s+x^-/m}7Sgk̼jt{yUځ܁܁;pKn.w9\\\\~R.wr.wr.wi\\\lC \9܁܁܁܁܁܁lNwr.wr.wr.wr.j4)jn. % 63KHyCHFk-&)e fnFߓtRI-/\R $ʨ y4PQ)Y%IѱBat@0ed '"RQR"F^8 \ԁ=X lg2y_ ,ɉ[g8"gǣkmzHł"޽ZZn/y[ˮa)Z T) 2,aL"BڧZZ7Y୘'_u`'365t/o+)*Y!5B`ggJ,L[lT @mV <>WGnV'g''%u'g]LC΄  dT5kHt[R]gga4vnR[.NZ[`ANe ;UN'3S:;ouHcA' XO!^I 0Q: |+cv wJ4fp'gHlKNI_&KwbrW`rK>ϬI&-OC 죏?_loWƘKaK#yG!@_&EC1iiV>3'H;5(#:(ޡlO삕Q>q;YՕX G!2,l0q$7]ǷctmQ,VWw-57QGt`,JDdvw\УRG;{{BSjLn=^a?ϯ8^7İIqC̩xZN߽-] 0M|{?^ܓ<bLNڙyxuӨhvUf1٨x*֝lzr\͜ǧ赳*yCuݳZV^ެFRFĆg}9cU&5J<Р7fq8L~i,WK !]24I_>DiJ jSwI<仿??_~oo۟\4;_=C׬̪,;܄Yŀ|('$]ADi.!AF%iy)Gv2^!Oؙ 8{"yd䭕Gt[&tR䃓@ĦF_%Ud=:@}3֨Ddc9-}xl "ZI!I 0Q9/$ -=}赶}kd'p:pWO%os&C|=Mu?~;M޵6rcRK'M cl6lc>6^mHݝ=,dIrBnEEX,㹍l7+w,(_ 14wQ_Rw }nW^R1V\u O!^qN M1U/c^>fk!.{^y5gLݢ?;cE[ϗN=M;n}"JTT cY.ȘdrOkV2$e@qR$TTL,5N&#%G̠* uIӱJ|@6Nn#%БٟX._jfVR1sk7[E!??h_?|,+U?Qjr/f G]gR&|,@e|@ i& +t!?y/9r j -K}jfQ:UUdX'tpUV2H{n6fSX+vd}fkpZ'FYNȿ 2D hwQ,WiʘS"МmX+$ q|&/WAicRomedlnbmxdIp ښZC S '/%tUwcZ`E.9 s JP\Xdy(q+ڄVK `] fE  躸U[\pmdчh:iY"wc~"{a{GU#^A٬PCfzk5/jZ'mHJ# KXm0Ip@]j"uɣX4"P5Pj&NNd[l.7sLgɉI@ d xR;IR N2AȚMIO }\D 3!fr̶&rZGKSYvƆ u2If%*VdV=jLU crWxy1O|,-ĥ&KL*@P,Q`:9$%PzAHb0 ҆Vx~e?dÑq Cyc:KmooD=47u1qJl?">nW{Z.m*s׽@|ƟV=]ןޟ1*kYڱK.}TVg}b㓬{OH|Jp)dGҚ@8F, !XDb|Dh6%Eٿ_1ne?tƋY&rחOi9}fgA1S;F%;^:TIiVz\s+L34;1]6Odj6OpV3u8.UrJdU qjz .f Hs3/jޞOYyyLB^ #>x/-~<u `Qs*Q!5~"zq5lLxĹO>x28ʠbuZZ[TjœPqO&L$'FC 
Å&xo (K :FQ۹Ь1A$!Jʘ1[xZ#sNLIE 5ͪfVЧįaw~֔4G9m  .*.%!\:Wes9!sCMș/oY"RXFJ SН&(!=> j R=.zGM( E_2*+=DbMgrp Y@U֬Yq xkH<"qAL&ʌdh5DM4 @Ζg}y7rJS?n:96é g eaZoqg%Ej:㒓2RF$YHeLBZ洏)kmfS:MKc=Qˍ F9KET--w=w5a+Od6\wW&#VpEJ–<RS&&2-S=2 f f\bږ M2\tFyKyw{Y2䖸YJla㦥1o!#fog q:攁ĔbpZ! ܌cd0 I)I0ukjĭiQ I4Jq#}r+nCP }!gr lZ HGP\QYZUhIL"%nb!cC*]ϦE@_Gڨ iEjJI6&ZS XͶEZAZ,okڷiT!pRQQXQ{I.Qe5MCG\/$ #FPbE.TH'DI1}7֐EJ{Z:prNo}|Ry{r:|W彫5&dMj{%H7)B<ߏ:ݪRAH_ m/p^|I$E/>^)EO'@-:}.UP|5yaMe&ґP3i7je1&c+kL~%urߌx?~S{],{q't!JB\ n6ёmU/h _Ű,؟poC!m^gNCA:QaI{f>I' `m.<ជ{uze LPjGl ۄ턷1 D.2/kn4 .^v3p$ ]7Te\@J kl_zHRNI>"I#D'6e5:k@ (ZLD'$rPIFj[FXlyN,UD(оUz(}*O*{Uck٫竪UUA\x '<^ J9y|Oux.M5JƁE[QqH߰ϡ.Fլp-Kn(-]rdmTHm`׷ 쪴Z='yb-Ebh18No3HC0*Q7?&1I$׭l}FrIV+_Zej,yΙ3.7E(It0yu̓\OGJGyiq#P@ {L BAǐR1K %BB*&rmٌjMCa(.f],Z֟R*ʘ1ݐ ֢4AB?^T8M4_͗y=~ؠ5j$!1݌} :y< 7)-gZ܋jl>QzВ$B "l:"Jbtl;όT,Yo'T9RSNRjTkLq7Jy_J3 B}ɉ7u_pdWtuqyiO낷/?ՀO&oKsdMdm%E cBG+$ޤtm` ?@U "{ S\ឰ(tNLHn&~fӴ\nzmk{޽gANZV hrD`ڤTjIbMM6,ŘCzbdDJ(:B&5)hTV6>cqPɷLpoH1GlzD#GܙP%D4uY'FEcɁ6<'`d^$H7J! !Y#HW 8T~&h$A1r$X7\g~ՁbC7?fZr_~_~q[680cB  4@dGcȨm~~\a38=9Tw7C5 rEX`u:;! 鐝yr:Ƙ4U}/Y}OXT2%’$1RgP^EaDFN.HFhK:9R";;;u~c7v~c7v~c7v~c7v~c7v=b7v~c7v~cqk@v`?:ُ>^w\WYn6|F]p y)̑vi>h-8D AJ-CxNX0%g<^qj#=NFرGq3S$֚pL hH.֙m*@2FA)gm=ŠmH\+ . 3bvCȖc/kXZ(}{7T((: O_8*V P(J%Hg]Ha.XVe'm!`Qpdp {Y dB?\2&Hc&9$|uF+q#露N,;n*lm])2,M\ kj1JQlL*(ڻ8vtN般d舗l7hMZH V4Ws%钱~ Y2T(tpCQy@v>!OiF똞4-ֳl9k2 ;!)M"uͶ4%Vϣ] >IO'iC*.~\k^R^n|xi~Kǟ}/2W_ӑ=*#BЮm_{Ksq(6&=icES$Y7[i+U֏mwсPoV9{~yΙT(jA!&dcHAeSGt䒲Uʍԁd矯~@TmђTYK^. vU]Xv_:L!kkU*~X|t d:/܄fn4zfr88d14m"?!URPvfB#H)Υ)}${xk5;wp %_mJ@)*N*}2X%0!j.K-A4u )!"L,CtfeWlwAR3{+Uv.M?Nnnmɵ-ԫk/;+%~X8ğ[-n_BnXA ]~Z~#w>ɵɊ$To>?N\ʢZPK~<Z}=5m{CQncٺK"DFN*BNrMS) 6p&gJ{}𢴸{1"V"(y=}/YiOGZ͖ߖӛ/h2,\Xu U!:&gM{|p'RM%~BSw R5#i2Fȍ5 P np 9٫(:jxFȱW#FtXJ>("x*SMe1qM8f004h m{ %B9&ASڅl)E*4@"76g?>C~|d/ z^Sy2G̾X|E]`3tͩ{0!{ff<%ˣQ40V:SPAȺ:𬝏y6J&E]X\ CXA/% !Mމldo'|ro 1K=$ =Vi``cgtW+j;fe],H5ҾB;Vd*0/CQ%k)T|hTd>~H9. 
t9B>P*Z )9*[hq(SO fXy󟷳1*:>9+&bFR]?`=铫e>^Ը잩׫SnsbR7e;E[ 95ݝ?T;xXZG ӊt}Z>iw=g3cpԇlcik?@grX`CL|6ne]:TssUO}45.ߎzl>TN]5͗p ܟwMtERjz'DJ1(HW6Mɓ%K g냴E`4eB>" ҀǣT7 ]5hAfEi3E:yWQ5}= 9(fҢ`ϩV9.@v~[uy7XM5V\^Ea W n01Y5B1^%^O@oFU112+0 6e4%e@4@VyYy~@֞gd_@gTB$"Fɛ5UڻZ5X[y¶͗?? оt>^>tNS+ )ycJ$R@F$SA)T5TȖ q}qXߠ't?u*v-=v}UI OŃܼ¿9=j gj{H_ho߻rfi~E_ۄW=~`&} 3~*I \R$,21Ejh3sSUmTu~DPpP\vEU#8,bd>0i_t-=m~jv(m2إŵ4W7xarFw1U@9;xq,땸l6Pm$L`J9&=E]:MܫEfx&yyW㓩uH _;<6@Rg0w>8,#H\T:ì!e<u_iu8m>PJ \qѽGFc&r PM=Kj(TBB 0ԝ*ADkDԌ6`@9ϥaRkjcƉYc\F"q{si @h:@SY3L6e"lNrE{νrި/&A & PD2i†rVF?˕[1v>񃩿faƛv iYin4*, `4%LT\f BڊWbGUW~6a-K1r6!] σe{\57k Q]ϲWoWSmMTJEF3q͕3D!Tj$U Ш90U"HÂI&,ujrb7A"2r*ĀqE0& ކn,zn ~~Y9_:/F]ta\N^ʦl 8âw3~r48?曪I1W0(s0R\_>7BI=N#F(ɂ6F} v&@ƙ|^0 BK1ko$Ad(Ϭ}1d ~VlN2~msPؽ<ݡ(o Enh?+8,(RO?jj9GzŠyAUܭM9_̵B[G =u;jn+j˽eUcaGk6^!bիʓ>>cu >[{K f0q-TQQSgc"cS0GeI8!IPQ1996*Z'*MI3ήsQҧtN#%_\>nͬBfFvko#B(Ӈ> ҪӋajr/漠 &qW4 L*g`OHp&O3aXK"=<%kFP!olYsuu?1jMUfU0֓fxUV2p1¢i{,˄D*7N`8p*X.TL 9)\*RBjMͅT8mH x)$BDbDBEL<3\̂ʦ\$oK`H|$Vj<5B7L$J&G@ rBljβSrcjSƼ \bT,jΉ^';nR{!"&VVƶHq&/yYK¬6\n֑TLRp٥!ʔ"Z^:lhA, -z7Q6-I\{KT bA >ԒS4ـHS$sJsnԉJ鉦OҸOB %f7=u|rR(\jʻ$2K`Ih)I:Qzmu4#cM#iF9˗-OWVGwk(Yo[~L2'^jN36 XǝL>NgGqUO'+~AOcP!i/EɐB8Q:tI3,8 9v=GWq8?"9>?:(VYr;;; ƻMDYUgNiI"a,irQxaP#M@(S(-B)ص)<+^埇 vƬpg }h8`^ZvA:yyy]O2"9W$.oKb,Ai8`|14ϴ҄Pcù .imE|sZQVԗ ՞bK584gW,\#Nu|\y|E0ɠȉrVƒ*1YJl>BΗ!=D>2rFs|DBL)g1$)KH%ɐFpբ"X7}qeΠ<kܳ|l3A`ڨg8:d25-|>{&*݄!W'R J $LSkD Z88V8uPH9HP,OF%NJiMHz Xe (7jc9rB&ƅh`*QY5bx:g.)j4eDzD$Ap$$R<ǣd\YK.Hœ2R-^|! 
(5W򙘥 yr g Y#\[WVaۺTZËy3N AfTH p;*G9/b\D* YSMl'#"W&Π?vjF~sG[_Jp xxZ0cyR9"x@8G s[hZ(!2nCA1$`a9gL' 9KW:rÈ|B2ms*4Sr`H̀Wݧ<@$O|[Z;ۛfX1"Z;x,/V(U0xCqJ8b33?mF0ii_NaYBp5`}sەovYVuxtqDo=rSj!^8lnL)KGZ=~gr()(U#cݼ{ %^70޶xwI8&mED8L@5OU c^k0Auք릺5:f=s~>p+]#9 vArIm'u[Ϟ jGGov aΘ$ӲAS$uj>x$n_x4uWt D 2Dܦ8-x\k;R\&}AXj) $r-E 6ɓQ!<̛B!d6Y0f(@TGŃL-0J ż$0Y Ũ`(qI;1"L/a2nzk'=e\ 7>s.c"u.D;ߨ£U; t Ӎ㗹q,0o_Ly6jل\Nfz-ϻ”ƸƚpKPZ6]'} uowpr#Ymm{lylA/ר}rE(W`XyT2iDfS]/)'yD9\3 %wX!|d x`iK ^Ȍ ^cL&U3g?g,ㅥ0c$/l; IɁX$͸  q8Փ;*刐2F$Ŭ7N))h86Iz|``~٫n]zi]2~˷rIc X_y :dD8G% M2D1wN(%q( ̐RE0b8x2DI~Fs٧u2lECsj>˵9F1Tc,#gBZF:\Բ4Yjmw4 07oBuQ5rqI&tgI];Sea_m* hk45AR鄡6ӏ[=8$滏`80fL{;S>Unp05sH)ku ORbePuwֶd٦-zYj-c1K^\v6x7L}?4d )9eDsQ>WKzڨ![%GUΪs@buRV &ʻXQB`g!Sψ`4{ T+%.Ϙ/,~֜׃y=?$i`XcV7~ՑR i~mxQWjߢ0&j.T)G@K-$&o&]>iDzB6pϛ&QAnJS1 ٿ2~6lP_TskpɯUF;|~ʒ;L+71t++AVoqecwoA-kn>M6|KfxPGͳ1T9W(l"Õ/A08ƑҬ9]T]n̟-/h]gaA͕eNF]*Rix\1k`\S-a=YϨ'Pu \S% )E*ӈX*N1'!2*-u1Q PYʖj1F: lp6HF7g)Oc.*5X@UtR hz!S&sn|isZgM׽ϙ%WX\o<ڥ@^xLDD»]Г@%Uzif?nڙYeC ];4)2`PVYՕ\`Á6Hq^o.*Z7s^-in&XlzZDKW00%%z1'5LQ1j#իE2:>2ۺ6W#U̙۝(ޫWY,=&-_6hPto_8mkH,Y ]]Vn OS@unK*!6IYC(Q6IYT00RŒ.yw.VM/&` ;/Hd'DSnD$ʺ(IU\҇?2E!ư[GJR&~򒇊]|DleΠ#sۨ rJ +瓻*$pJQhka 9"L숊0S"LhvQ# R8I UKQq,ҞS.mRHZ鄒1& e5BIBjR+-g!)fו.Z_  VGPtl>2yOstj)mޛUؚlnd)H1?!̐bT'JBq3Ѡi2a& GARPMDJ+F6 \w09Qn1ʭ|/OtvVwPRU,̼24bh4XFDFfDY*JXjADȀrl %DŽOh"f jd `cv9&,Tk0[(9кNV1U˭';d'm+ZόQI$''0膂2!ܹR,é=|"8CN6ZqTvoFޯԈYAnj817HAEjFBÆa`sAxurfLJhzCc#!xHEocdVQJ}1)Rsq Q2z{u"QcX CR)Lkt,`uf4 yHϊDNH8]fy,N+7mnp^ykrSkƉ-UJ<]n<]H.TqcN(<=_0f<,1h 50KB֫Gl$lJI|4>0A0[n.LG eZTIDk1 @3G%RԠZ[ mڦGTK/w~~yT{n5r'h˹{\_:^- Y ʌ0yit݌==llg͘N򝻙'\juwCﳙvCot%v!8?a]Q{&&\gm gqYs寇5nw^UDek62EΖ6!\")cX6jTY|:gL$+b ^5WC5$jZ҉!HewCjOw򝀪&hZљ 8a ߴ)~fx( xt.V*(e뙱1$fL OăA& Q%a U[P5tw > ?#,2gly3z.G1b J8Ղ.o=Am $M_T DHF*]j+@+ɕP`m+ݳz'&ƕb9R4j$ ſ eTVzXcӒKBevSPN 6MGڅ 0Nej$FYDlm"eZw4r֗.Ou9;๹C޿7y8 mA_!F0H}Vs2㒓2$ 29N:@HtZxb{UL$@IYyQ),Eqg4Yַ"[BG V8e$|]n?tAbL(8t} BRex23bg%>Ҳ 'C,8uBkb9Ӏ ʃ",d6gB1G9ЏK{,aUi+[r9@1)"je~EYC0.y񵩡9S\_ ?.&McTlڵ|4l!0BdTIƀ};s׿_<ߏ}SN5?$]a#P!_\]C5u7\ImNO*E]lv~Ca*^OZLg 
gmzZ*0!(er\[]놦0⟗WG's(U1M.&gZ~yWxuyq6=z Byg^ӳ23 .ˋV%zcK\o|S3ts3\llfq|@A :j0{wAտn&z}(26i {~q2v\rU~}ݺnR(׶lyVW |R$7.k&7 UJdjr ǟ?yӻ7N~ɫ7NûWsRޅAЁ~7i6樧~{HC?V5Tq? ш!6E7;MF&'$̈́f$p$BZ01HHҤ"QiD'NRxc岃)Î¥h ܹ$ {n5<%kPI4"I-j xq"c٩BYIxw}QA 5 ݹ!tGo/tGqJvBwf業 [ qD*I$&)K͙GD<< V0τ_`XVJP[7O D1Eia8nrDI!FRРBˠ3+\+KkAcP6i@^kn;/rAggE\6Ƽ)/pw(]D/|wa9b<.jvf* c?Wq6cpǽ{+az]ܓb9D_4tw8†Wx(q07/O~_szoK_o?b+37ѭ(?x.򷟧A_;XP|փ>Tќ+N{(lF7FxW](>^B\J#ՠ*OG8t/j/7Qbz/_*v :zϻ?0E 2ΩƖ=+wTRcj +us(#@ ksO$(uT'NǺgDɻHI&W<ffVRQWvkO#B(>odNVg1>WM^łtD>m )co7#g3r-f4 !9V j(-K}僻g8iTeVM5ó#Mfk>UŲi{,} j;w'@8i%SDX.Tt 9J.ITB@nFԙV Zt#)ƣ UH!)%&Ҡ$%Ζg˂ZIT=6: Gb΃Ngo *'Q ( AVf; UP[e;!IXzwfn5hk?\FOH\y(gO+➟4MV%b4췑%;W&i Q^#p̉}sկƶ웓XS'[խLS 9݊E wAb" 8O$с}2@Hm-9Āu{wifOle? Ţ? ?Wg*.?^[/o$zozgS56so>!I'ipo7I8< `j6MƭTOn R-=Gϣl^L3٥&R@Pal@@aOp'I IoԉM-foҸϔ;~K<^%.5] eR1D6hic I y]vwvIv`Df~/^x9kep|+YBKLg #7+FNWbCXwo:j9Tc.Vpz^ؼE9[ 9rT'+^$!$]u~8/_#u<#^hcCCq=lEqiJ@ {#K5+%tђ#ρ2xdRI!F1G Si| yJBcIx"q>i^o!uqH75s U{< l5|\ vi,Q*28`h/,3'2}8D@ ܦ)C@ ip.]虅#,nz"֟Ls[iqs^64}l!R2XjO4"<`8@?y [?^ p !šDB)RH`GJ hdIREcR=Mk3pp>\Wٻ8n$W{H[$Hd p.no,*Jfd;+cI-FTԲ;ciîfW=*,W?T)|OnW ~W?W2~OǏ^{Yy)6,/2zWT_cV|@ " a" d2BE<>5"D9z ;t8GWbĜپi'$߷L^2xw՗ޚ~^WSW~ß|IEHaB3ЃրŇ5kZxWWq4.<n{ߧ!Ћ#JIIJ)J gu7V1R%bs,TMx˷iA虐kCV0G,hBddvrLH lb$rYQ)64NyykGgF/!Nt])e FQLE+Y87@Io/U=Th8Te>~y`]H_jUnN2˯pkA__Ĵj$6f\sN4"寳].`ϗfAFf?jnvFy^\v=0CdJ`2[/_%~nyGz~~%ruXprqvO7'y|$ȋTmvLQBχkhU& g<+}^=z6{O^[߉lRѱS)-ce`$OԎ% ! 
/6Nӧ Dߎ򰯴ߪvilN,umWxe~NvL~1\?1M#(K5C]hk퍾vX3cIw pg]Jv.]Xٛ~ǤÄw;cݙ6u?E5Q;n37x0uW̕rdrOOx4Cp"$>w ޸,H>h"YQlo},6Bfh^+f"⫊it/4_/<63%)eB:ےDؒ3ɪ;<Vh 'kzc9#o0.fX2V|]qNl;cCpX]3j& )y2tؾs|i{ztVt[jz_$ӡ %Nl@Lf1zGQ(ZڧPXRxD烕3Q(/J0R.1[႔TjQ ͵[@J)D̲Ƽ%g/;Nی1qc{mcqmX^,M1P=s1sf!!-& jFa&HP\̺Z֟R*ʘ1XPS`ɏf/t7Md4y_b6cTJ=2uէ|JaV$+S{[kCo^))MQ fhɆU,Zm0͕eΆc)ՙ@>)Z]("![#ZcR1eH!T9Q)mN RjA526#~dlVq$n[B7aG\@\ss?M] O7/f@ϗ˯K=ΑUo"5&1x 6:8TTNAbMJg6'(P= `ld0շ-l rl;ÖpRȹrḺv38e[>|BoEu2d fb,@ߠ,$ [jBV0,15CeʼnDY3VtlkRѨ(VRѬcQoݾ9ao,*0DZ#qB;ơB8rRiT0y 19ІdFfErꦈH a,m5tXYU*΄"afy2ȖB JFcDlFmW׻1:qɱq&\p.'\ʍl# oa3ѣ`Pz5n;Xv*Y6dԔ&\| \ 6㎱jI$z{q ;aMяV9$HЅlhѤuE!jdWA{g(|@^Oߓ%0iTm$SGAE_t.h-Ŀ]Ȧ̺ HV/E6 OaYsJhyV'y 4v%9Z+90^ M+G%/cǛ z~z-=t:;N}P2eLjɤ4lxQGBN` yC4V uŷсHFg(X2!z!$ nbHY@{JQdOPf KEJ"|ȹ᠆KKxWu'a%RN¦j=4B߂Ќ; >XV]0α&H - X ѻ)Q!Q U3d0N_6(=k}H<zî} K}v~+p/wf蓑pLgȓƶOK-R-s,F9J[rQdИs/;wu;w$-߄wͣVU0$LYE 'Z,)cdٶfsS:p:AU]-½AU d։,3BI : c4XB0`YMDGUO$zx 4Һ'y.7v/ںA)`źi}zY]`ΈsBm/kq,?(k-ɶXj_]] fe' "@$el/a/ lQO18)b,dAڔZSiE}Ҙ=m1轏0y5`0qmsPU4 eԍn]U'虴#KEs/UiMF]"Awu┱NN[Sc78IG}rQD6lm h@NK mӡ 6Q昌M 2ڱ N)r-kƀް9'NoFE.cD2/L ]-'"M66y V[3"Eˁ~𬝏T6n}< Z#(X$blY9tްf+~ge[ D6_‹*P Ɍ%a-lbwxvLhBArdSc5#@9$fx|!F G o1bw)6LAuQ5Ed+ ?@S0&D!P0KS%:pQ&@$ !A4%E׿4Fֺ&U/';i^JEBxo-sSщ̺41!(k}}hŀ& p*O`+~ KM*Xژ1er8Qg'f/J4I[;~u_h7c%7ЎZ .P TDt RR\Ajnm}p:Dx4+elToqȱҡKuc|p!ݮ~W+5 e0[.xq(x?xQW GsQ$Ie l}6߶0[vGw~/;XvٍfQyC "~׾ߛe?d.׼6;W~|l~k¬0xuL*xzlڮHX-?`f[ף^PW%ъ΁uEI#R]6(fVwUP 9ލN95/9jD (PϾ0(M"$:Ff9emn"2 njܺ!BGe"54~3v>|vխ?y'ie<˽]=uٻ6%W ;TW ?d}8>>C"}QBR[=&DcutU_Uuեupק '!ﮞ?]NGDЩ܏k<+aH Րp7ܞ^hw~]b#L]0 ūτW gh饜~5bc{%Q[J ThDhAKH3DZH-L)O%  %Ɇ <)Ý$)O‚AHEjxb҇UL$@bT(.Rc&z0Rkl8#5ԉ\S☠Eo7e]~G _ar^1.S|nD/?+ĥ&KL*&PKؠ~9u2#üVA o#;Eo$3}-VHpJqwwI (5}o$9֎eBghHfdlZ>DZUN5@=MDJKhd ҥ\y-= vܱ p| > jq*E&@bQ#Jr[ku9-$Z&nH\V9&חSܿOkkyZ;OS_thч`0'(3.9)#PpJ  qɔNЌaz7#D0y?n7ύO}8R(e2oipWD.d weeOT=c#7þLwEqӯT^bdm7U_X6yُ5n=&۝ŵ77xs/Zbm)/wͲ&q09ƥ`,$CSK:E8 828ju^|+djRv>a ж(x^YF, V21J:6 8kx[Q$>AI[%%SpQ Hu'x E` 5FKT(6FNgNϝV;D!RanoQ=`HM'nj8;r6= (䛭378?ZNEu d!9 =%ZQ0F#!Wk3ފHp+:~1׬lu!yj&JFP$ϕFK|kfF+cs:J&(.vlTj~$խVxnai#Hp fcAgRҰ Sppܬ y5i@d 
AqhlKrP!v,a^mbfK:,} UgVq1R5I]~, kaT5M R ?+E2Cvw>/w@#F\k9W1`cGxz:1*Nrz%O8/lNHly*Pl6;ôBud2~*mH3״Jz5k4ÄTqm]eGSB٣ 7W\PZ9\K4ܞkL7oٚ ~S{~1[fs` f\Y\lr{'zcO\jS7ts7\lfqCA ?t`żOD}vm'^)kzR҅7ʆXHXy0jv܌P"[O`by_(cF+3S5Yݑϡ[E-8d)f"ԿR WQMӮyTlyT+_G' ߽@!9yw~z?ǟ}?7޽|/qlgy]+hkiC|?ܯ#o~ UsBB4b膪ppKre4E<4 o'i&4#H :čABD DE$&ىΣO68ʥ125.E#U%sC)Y`MIJX. 5D:LiįsT[_Ie};/C 5 y Շ ݙYC[o(MR$K.Sb4g53qr` bG̳X`XVJP[7O D1Eia8nrDI!FRРBà43rR(WBCkAcP6i@^kSѯn*rSgGE?6нyS(J_]efF|;u_p\.0f~sE^xRr~Xh仓NWyGrrRŏ5ݽ†7-B]Nq:^M '?:NX Mz#2s~kb'N-Q3\r'}8+_饢Vwo0!nNe e1weonR8{<-DS1"D1J Nu^q6}r\4Ԯ{S(sVT ctEJ%=FmY뜓@!*HX뜜^"%GԠ*/|t 8#zR1)HI}cW&ӒH%1$ <8D$%/ i5R-nU Q[e#m}r+"]9xqQsrOW=+$~l@J>5 yMD3'2}8D@ Rl rZu.o\RZӃˮ7`'w7|aͶEZτ0J4_~ 鱄ii+W KGU!ҰrýU`5oa>8+}LsGQ8Wt)DJ@BkFx*jcR/R@f-G墹tzo-k -*7=;<0H QOQ1Ƃ0!q-%5)DA &%UHp5{"؀HHƬHc,ȸKu =)ZP9-BTRŤ6#NPx&"N2q>W@"蝆|32MS"6R>k3]gڴ=zuB99tЉ8)(Iu.$#TF(-džC$0S𮇓VSn )Bl u_n̰14s |3WɏgsV}TiE:K8;v>>A|slnR (4* 搜E! ia}?1R:{@@ާ=n{SǾ7uܨ[ktQvYuR&Pb $Iɦ !AXx`M 4Ӊ_fmMQt=Q҅=>tžO1W-4Cqt^ 8C`ςA|k^xHhUBM]x%FQQr.0GCJ/k)8;Rs:쓦7dbX*^nHHv.Ct4ё{}舗lB਷G|<휉$0;y(`(Ȏ$J+ʨiDrC*ֳ}h3(ir0h%3db.E iޡ.y`|hpW^Plw9Hvmu'izS!XNz ?zOYM{iG3Ԃ@,@)c'C|k"^bq*h^<8[-@aߑY=9ht45-F"IF(L`WE؅f%8g.|0,> Zc O,;lteq?f\5XڮmF/ޏ{~oys䌠\_# Yh)%G`sҩ)C./xp cxP"JP笕E-1 >&lP`$cE۔3 e|>#je8&((1JJUڰZ7g1;g1~s$ 6luU-wS}D|[`ˠ7g$/Ǚ/XA J6;Ԯ"2 Ȗ!* J٨dMـO3z׳ yQ@"j2J&PTr,J)9I)N zH_b\qĞ[}$O>?ǓCuG 5dVQpꃤ dJVO㴌7/C 6o"p.q@ (^jī//tYzOW/܍(]RbT`F4R&SH  A*n*D@-0:TKDS٘  N(L&}&.'f0}g.?~[تpVB]~ /0-5Ѿ?e9I]RPNW$4G ;A8*74;ɵ.$4#L./V~825륐+EߨKPa$FHޅi,A MsFÿNc2-Ta{c6cб>cuS&5vmz8򆎬(ތ@yT.Ն.6p"GYnVkw}p0{{Mp+s z~oؽ5';,9tmkYe}4ٴ>] }槺Hn&>돧xo`81Rnmh{^Ok(T}t׾Cv$" CDIJUMFkDA)S$G% t%m^$ Do儋NzASKV=Ku޺a3q<۽e9jq)b>2 @6FSDt *sDID(b)ok+1Ҳw#oJ2D6Th@hrdNLJ݆J=!1tYߧ"SRɡYoxc&W|aedtc;_=|C^.,MВ,=-DVs:Tp&rKJ.Ղ:ͭÒQTg\.Z*p:ʁ+R,`@(Z{fbvf h @\]x6b6ywyZVM 0>MK=PP%N)822fiDRIQgެ xj:FUubboa޾X8sٝX΄H{fq6k)xfqזmD^:VȺ t1qF7)8:r&R֊Ȓ5$=C=rΐ92Yt҄k2d),s1cqRMBcLOW(O#6ӏC=jpŃ( JL`=$w^* 51ee2$DMS"pnLŢ*ZGM&g Pjզ8=wď:_㧴LK//~q;.zrp#HY1XN(- tmGԚ9L//?l0xPWF |1q{ӠA緵'¥G<.]".h F"t{ܢr&m u}O5g51u['Ga <e 0h걄篥BTyVAH?Z>qOr_/AKoբ_/f 
#k7+eĖ1Z+k]un{=zIWo>}nV9ekb cX@b+MAu RZ.;YAʢGr~Tda isu]8l|7Jtj @&՚%} ) )ZJ!qnՁj8! 6ۦ9@R6T9[HBEFn2XԒ38Zth&«t"D~(lme\X *8uNy<5=-Ն.?5ބ۠3${_/.7xêB{0^hj뤷.)"d|HGrYi`kiz $!'6#V#δ$/"TR&Xl)#g( uIm JШ^HVd$2ArrsfSR88"tF`vL=OC0ubV>o`K;I~|=2D&^frQ#9ԚLQ#E`kdP: `MM)#~jB:)GMn[dDusJ2Ƭl)08BfV yҐ'i׌B-T"k( qlh5X,Κl"bb8)f:wvz~G#25C Gc+NJ2A+ʧEс'lɥ nv;"TuO,`395ۇPu.X D%Ko4s9+tL7c on0qk@3 9:TcP&.)L6޸I7?yҍ>x au= Ȕvݿ@N.4]h:0*wQYY`1]NNwȤ8#C*rL_bQUedaL` ͍d4!5w o'yz az uBnmQ˺J]'Y ʢض:C5uhݰx~t6杲9;vm{- xz^k].g-.v=t~//s-okڨ$7f}7?+e?u܆irs#ŲnvܼW9x9ܗRB$=D( ԹBW0$HI:m6hy$ Y'\ڀT1/BCT^ZVy* 2P}\$c&$y[dTz09&]yoǒ*`9C cux1 B!jpH~ݷzHF!tQտ:JKmv$Kmg,h\> Ybx|u=~KYX}7X YW5(O S/;``.>^.-3%0-f&ij5.c*hu@ fnGtۋo0s A94QӤ,(] AJ/$&)N `Aô`*!-G8IT(ElRW0p9m`LdHTPV7#-3^a" |ļ[4dʃ,09ٸbrsBW a j~r 4&)ٖ-q70hSE$E~rVPfI=TcmSNmsI&-&c"ТeP-US֎xhPQS ôw{J2F٪WnWA^{ZS|10Y΢P _L _`@6gyrd?ІEe-TǗ _Ւda^*HRxMFټfߚ;o&l1)\2 2% ,I [w˗--斮5C7f{`a-`(|XZٿ[[Vg궾JI&ʖ8ܰpT'՜T ʖc#hؘ7KksSS2د|.vevgB4Vö (̾`Z_?}&隩JX7]m$G?)o?9{3B~}+ `^vBg' /ݽiEjڛ54ߡiCtoWfzW!0~@`ۑ:း_:**=?o\OLh A$Ga.qc@!J"+hPCEʁ4e&C+crhN}vh"vqcg T1o v'*ʹ?^T*ތm9]UD3׿~n9OxH\ƣڔhG/u9tQήѣ{=F5B]LͳW>n{g{_ X MzGheF{y/@ŧt S@b57 j 0r1S"No=t+(O oxӠl<􃡿DeoYE07eCfo!.c8X|}LyYפ0lݏ{pA0s|Q")z?/_RIfѧwMIBƘI@PQ997 -6>hDaNPOĝuzۄ3|j.\2;:%^ݭEhO*ef0Ez?kV4wP?U;Z򂮙ȧM\[M)X}3},|,3_~ ά73 +üdrN$]+pƖ>gz]9KtYzrn(tp}ℱ)lͧnX5mo2!Grh),,CD:K%.qT@nFa532[t)UH!)%(1 H rq--F[iS yɥ>b+vd}fk\._2D tgт)ܛ 1υ'E 9۰2VHcXGbBy<&VَF 뎗<}9vtn@[sf7RH|GFsV&%y CuZ7eXp=/ƣAjh;W$i.t /(܊.~5ݯsA-a hR֛t9?ׂ)O^n8n}9:T엃 +D$Zx]%8O$ց}2 ZV^ajCl]쉘| B-]D4n.b\o@X-Z6 r-oޔ[ {QdP#=+q[g^Íٓ{hfnV\/%pL0am'v b9(jeR7^->kJ$8]h,uecFRRK "9We[-i`2oLx*idN(a$N)"D.v`>bM|f[1*.6a6g4YN-YZ#~[`-/=JT|ڒYT>@X}A\8_Rq᫧΋Yfi%,5] xR1@6hӊ+S1^b1 ҆y[a)z;$}yڛ&>g_~q' ZRsVP슦v7 ɸ?hRRn*[7jqX֞Qc`QσB15فǜcj]?&ԱsBTqxWȠ T&Bh5D,:[iFΎfFz4~^_s㫜,WsFf;S}/Ex`h$TGU9AqI %NIdr;Q2! 
-s F:|53/Ls_Wrz\ *Q6?656EM[ L60?EkUݧʹkmP_CEGY7c&$*ʾnUWJWNfr)KS\۱۪ jj bjΊ 8BJ;֜Y<ɻOEśep7\σQ/fm $Fgͅ-wFB-#iZGb|HmÐaofyFa4.e`ZtGf1ٺq:*AGm&6j\ K/HHy<-O _1K|qey_)70F¥Э7I ]ڂ F0HGQ#'xzYX%]U jQ8s0чޟ}wo_~xwg7{sOp{[IQp ˛߷Z⮆񚡩b[ q|.2wDwKCڢ4-W#cՑ6goד^b$E3Ǡ)ycj#ȣ7(H&|NNt\d(^OzjC#\{ \ aƠq YF#I%%2 hjޯӉ_۷5b桶-[pG}/~ywc;/p*ܙt YН шqН3Ku}G<֣9 L[UAZ1@`CSA yvJX81K⌔i ùZz,֘!oP Rv CJb #52"P!(8v)&8 uFNN>)0r+R"2%1K8 ^0h wR}GbCVN`tbMq$C!]vtu壅濾+,73ˊx:^gQ.~sߙg0IQ8*M9?-xѝeG%J9Qw']/xuN}j{i&&s|m.acxב7|Qn@zMt K[eAa`&sIV}|}y>"ct B'xc1+>㲗UUz6jM85DGì̸JPEMMDˀ'7иB}k(4+Ae[;="X1tN]Ą"p+/(H hĝ V$"S;'&cKGuL"XHg&.+|xIрCV A1+.rפ-yA(Zf( ǒ:uwG('^e̦ѰR cXD+@JO}:Te'|0fdu2'WBOIN_QZ| wMm ޫoܝ@<T"`RŽWѧ^"0p$ $rͅi?Nx&}QJ%@ kQL`[&RG;VN.őp.uXE7 \.QRB6x#_2tgށ)ܙ 1Ǹ ܩ@.H"DDq餗 %[iz; *LO/y^mm-H#CG"C9LTԽ#y)~`GwsYW3, Ƙcl:vqcgmI 9Zr;1]L_ft[ fN0.UmD,1pV;ĬW逰 OjRz\Tkn샯+jra*W9ݻo3y"+C_ t8 |]]Q6v|Cte \9sW柣ըŠh_i8}&_T5PKzQ-`j1z6Gi@*@kT^oO/uQR/D(\!r -%&8 P4=z Zoh +lvpe;3HD %XS@=R3+P b%so񁕋5k@{ɀƗO}iA>~OW3[M8=> hdP:%0}nFoĻ0"׳sVo&OW[(7f,v2)r3u Ӝ GKn"'Jќdsk,1)Y{%h5?V<>jB :U#(.,"r*g#uQVr\v'հJBHtǰ)x^ΜlJ^b;/`/^\釩Ӂ,C>}V.[<^.2nAfRGa<&zG=:\+7W/czz@@_z/̟|H뙐 ú/jpImv5NuF 7By'(,9ÞF"Fn*:+\4wh?zL;ҽ-Nz꒕.9fЈq`2bd9J(a'FHivQ]ĥ,޽d ;eRϵL AaOr c CzeGm6Fi#(cF1ZleF.5 k@ы2U`C"g3Y_OYlKN[<{ g-Z{:EX+zy4K¨ǟFRbiS#⪋6tD03fYn9VoAևc{zdurU˚탭 d ע+H_t7j9 dF% 8pC 9aBF2ͼ J ERvz>P:$Эv  v}mҴ[AO>J2KE|ޫ}eӗܢl2z:={Ɉ24 Z+q;CQ1 0 Z: zq8{i/۸ `x4'hI @5Jw%{_:!U1/",%S><%%&SH]lfTU[[);nti^4M/VF Oy{Ғ#`Ձf`%J50aϙO7g<]^^|mאcehVAs, 3<8ҁ!*pKN$O Ӿ?=ڋ{%NjztЖ{c:PʠcvV|pP}d Z#=wvya1}ɬE n? 
[binary data omitted: gzip-compressed payload of var/home/core/zuul-output/logs/kubelet.log.gz — not recoverable as text]
[uXt}&K+~hCx[?hmf`߯JKZʝJCK1Dp{QJ*~$ݮ#J|<:$׻C< UrpU&0E56re%]Dѡ*{V\UTCETML1 'qS}m_es3=m\ӏ}T;;xKH X-/(m| g}tٗu}ۤt@ BJ'6આi`lԙp)ڻ83#˥'cGDB%B=װ,0K5rr;o^N~TwÎ'k'yosAew젶-G9ʀ6] ״gh齳A6Dzpiwl^ <~{ [' p]#Lf״ %'+lGk -7o$؇}/2gaTvP'QFQHm}-ٖbk҃9XwC;IbrGlNTemTP=4Nfxa;$GT:gP'ܹwiҾQ+KP)9.L$"s :WDk{r&۠ 4KǛպlzr8ӊ.;1;uyu3ET;YA1ZPSLR0NNtTn(D!ِcFD˽0gm _]axxd,f*ƄҒ{# ܪ"MYY(,LelS R!F ZITj48"n\ DŽ#D&. ~O luv`!k6?6t@r1Ͱʁ08dssQDFzP`J* ޣQ%dD򵁒LYe8/o!CG @dhP/"E=i/f`^r!YY97\2r6s͆[2<̽ Р_?7M5#&49cZh7SbZw٧MӧG}9ڡ.?GݺnnIg&y`VR*~cqu߳ lh 1`U~n.vU;QCzz{ iuDmu{rFlww$׎qս;|i =w+Ów< /rk ZرIɯݯCer~cwj PM3(oYwNKWj,Փ?Hx7Wjnc%1a?/Pwov}vAy>ku85Ŧep</jҦ|xQ|yRw8CjM zm˩m7O|$mtN<0A}꽩D3l3޾GZe§gZ]Xwoܒ5ǭc[{lek4CIJɈ2y=($gYMlHEHqKʛyJV yS>sI_(y]h,;oV$W$%H-V4^# L}sVyUv(Z^wV=VmJ́ǓTcϔ{y] /_t5!w9¬u'f>+\GΊrQ&XDy˘́!$2gQTxYȱ bΆh9E&i)E`!t*9( ,sWB.PQqs 1zp .Ke:E튈9}N{]imx)^2 01b-S j) ,Bdb Z_PA1KM-8?kN?e"*X!tNqu(c7kP jHKaASB[ӹxX/WR䪨_TBst2CziFETsaQeҩt&rCJ.QgR ْq(ٳq.J8(rJQB6(`\Pj#c5q#c=R kiƑX(Ye,#ۂs`pb.,YWꈸ\Ǐ^g5-9Ue\#.;\"*1E(hDmwC&򈋧sjq.x@؞CQAvܻE06 13E?*`3%;h8UϲB$ M7^ez>x?4>}|J`9s;^XJRqwN $NTQO܂ RmH٤328tq:|RX2Kq*!eQDW8t N! ~Q%0BfSjJgCyaL J>.ה4Q[_?/fZ'!$NdhQttP %dҋ`3Vňh58MQBP,Rs>$hh,Gմ.:I!rW9)D:d $^j[`%H+0qTdNQe- +gҤ$v3&i/+Y5q֮vNfΧ(*el|ϊRR\lBˀˌ.J J ]Gy5BFXM |<'5PӟYeR=(AsO%e{t\ǘe2-E_Z%d]#ժFj6I1>F JEm:Z@PrC#Ra8O8bY%6/md' {πԄH,ƌUȔ(I)>ϊ 8`N.&Dv' e}6gȹ!)5c)*:)e.+D5p.gt(:&<:Fh2vViȱ!*[7Ao0&Md#Q6K#QZ"Aq(뀄13(qT[]e[KuՍ"7Qj̩ڎ};Z0:R9,d[w&&oٽu\' \HL0")0zyN\/$RQh">1 "U,"p8=:g%dPF؈Gm9hv 7'6[&nMi)SpYRw-i@Q fh-NO"/no/NCB7(Mcw[N+u]E{9_%/_%Ӓ\]'/PN-H2qt7\҇]={KK!QO׷]{ aE,BBԺ`MAd] ^ZV觃b{^QR7 M OrmyRՃaW0٬V+P9MFcO= !YѺ$]sȭ_dCsOO*]ٳ?{WƱ X@ Xx?AS"L ~IP1l5Uտ:*=WO7X"i=W^qvm{M [^xr;= -w.4h~BK9mn΁-z iFGw5UmCkOdq/'1WuګôgIInV*^. r'Y#&7c'$G&yreSv 94{~sk,͛LPbsBGIR Q֚#,:qeQs3Ek͋MRU,z̝vX-"rot a[0$> ^t^'y*(W=)A õp{;XIWV(x[]էW^^0q0uqSL W@LW j4n֌׆R`n1 R3Zcaٜ{k4kS46B:̎5o"5A^Z,ؑKl^ڣN4`AM2U + (WKM)pZ* (85kξpv,4t}G^Rf P80Od!"C8pcQ)F (PFs?샹a+h.7>+6<4L^ֹbEwD@$TD2'S(SQZI%pͪ (O x>Q K$tCFԭ<a;.fͬ;?:P1LNωwbZg yR:bm[lnYtWڟjkڅBĈe$&Y*Gs JfQ &,nbGfËe#u\)dUAD

Nm?ָO qᇳd{eVoj6WŸ7RFie =ٽMzz4W!%K~y$ú|~]jz2d >+SF)7ZIYJϳ_'_ݑK%Ur-8粤d6| ֨?.udu&I4*LUbng;&9?۟^_ۋ?}D]7޾88خ:ۣ#p_{75WwMۡktc7j9p׾! 0~@jpb{`IbDq=IQ(1(BJy-Ƙȴ( J&|vFy\d(VOzjC-\0amt8HY9ND4h`# ybb4N%ԑD#a-խ_Q%Ej&Hxw^|I5 y0;s3K>mtyGAG)sϙ.;bԁy`|<{AX9o"+aa?[ F,3RRY0b "j鑳Xc @5TJ(s(joFI*N/TjtCчrZ+n'app@IلXY 6p$t('KNjcCt3&6;}~̦lF25,' t!m Y۫t8,/f&~4cS?Fp0\ή~sg^ E8)M9?-xJ['?LN8I#99NkspY[ Y>=?jiM O>۹3װ`6oO7yk̍A7ȊA<%t뤥Erf`2`eǫ0Me,b)Yo+tB(]>eOS@%a&y pI5d~_aqJόkҭuz\3:{|K:k3NPB/Es}%]H bT-9 E9#*:'꟠"rAr*AܩP&?pO܅K#_]\>n$B;B=5U) ݔû_>ۯIрFKP&jg ^+x^Wǒ:57#hH|y:Y (8lj +{߁I9 tHЧ-q} >I:FrDe\7%ЏQ:V˦Mta÷N Yp*\ ޫS /DABhLFִ\}h<`>(HoRĵOT< b2TEAok`Kq$\@KeVѸWD8p ^iLKНy|'9f@R.H"DDq餗 %[iZ;"*L=^ mmH#1Lw$/weKueZ7eXJ{0A/6N^r# $g\Kr836Mha_nBPwzfxfvW٤(7)9eSozm7O>U엃 +D;ĬW逰 OjRz\TknF^TsܻocZ)4Rې:.Qv{2[toAP-ul۰ȥ^}n]|*bZ({r6M#7L ܨ+@٬P#o{fõA-EmSk*}.[i R1;Zz)k7ÿ @9 sY04\@IܷP1Pj!^Ad;$ e`Rg^;" i'KN8B{fVJ̹6f`b"{&CD{Mtf[R%㔤.&"6s$# Yö*$%Z$=JTlȬ^iC;&V%Ș U*=|y1G߼)Y3&OJ h[zUJ2QR Ke(xM^2o#;EoĻ0"OgS'G0 !uA=g .i:km<~>Ar"~{;hBYS d+B( j mQcppDp ; "*6^+R?X]-׌sFS-T#g%k}tsL(>6ȹZȰ, Ƹ#aREbD hl#661"H|IBZRkhF^VOPE z^`| !rי26H=SHd\ .# B(*U J &v-= WbKUG@5&R&%MC ByׁBR:GL D H5k$XS=H# * KM-a$EV D ghء5. ~S7!)~uRQMoiug]yuZL~7_׵\MG griuFqjn5էTVaK>&~=+e+Ja#UP8vݨ%H[~q\.*`jMбZʄ2a^!X /JĨ6xΨӼ :^/I^899^Eە .x L*r,u>`HwKW/aN(Esfͭ$GѤLY[5%[ϭ,vx`UZmUH 1o.a3D( Hpd]T(F5jÚ kZBGd{&8G@f+`^}0z,M>MB#Qgȇ2Ϡ5QSZ(,yaLHfz9QW"M`dfLYN^o2,n-چE]Lck润S35+dG b;Aa,4ŒHlkV{H"Ex^dKzC#ƁɈ+eD\s J!u聹k#ow 7Aܟv9-yZ;zf8ëP ;eRϵL A&9KN!e=G2#66i#(cF1ZleFJ5E刊`)  0Ut4]ӮfӲf% őWza\K51,7]r TW߽_d}lzznQ[= ۩dX*j87eLe6 ?{Wƍ ˟L /|ejm\)ڊR'JveeD-'(`W2**緘{ Ow^݅U=!VrlZn|wrvƿ DWqQy5{~y?tltҎI 9wbq6Ƭ}1? 
38:tm_JyŭP,gC`/>8m?g1)L*62N)1`bGR1'd=%8nbO,?'Zݮ7{뱃[P5rdDi7\Ոc9 I:E J)0(l+X<&oh68QtM虐KCV0;β Y;#3i1s &F(k/*Ů yZf8r{t[k}Ȕ)dC l~kwJ(Skv}lV*Eʹ~pw+ubDgw?Z6E=}ܰ> īnwtE|/pl cg><]?tq;\Rr|xMx.bͲor5Z\%y<_~//+a|u'">M />*d>P5c~*Vbgf^~}Di|كSNpƮƮܾC;6"zH_[gYdlVmR.g=3~N޴8=7]'8SRStSy/~Jֶt"4L\S]M+WGopo}r<;fˋYވAu/+ro_w ͷf#t5UOWnaC#`?Ttk4FP鯥8Aћ4HuDTq$;0}Y~\򊔆wih]2/<#3{F4-}^os;] LjꀆRtdޥ{jx_eI)c]b+1q˾J\hx l8u-"SfH+g9ygwvYݨ-=4I$NE糌&:+ o},P%-d:O&q1_C6Yx@~%)\1t%8LVECH @Z-8^Ы˛`>F]Ƶж1-^)?~;u@o$ £Ǫg|VTHߎ'M'd4Jڟ*B~t>ޅ^Xvi%uSAg:$)PB#tD<" Rm9T[{ [Z*OIp+1R2Q(/ ytQ].1[AAJWj5K9႓s (!C e%EBIҗ <`lPl8;)3[{_gbZLX+0ѳd*CJ1,m^(rbrVQ3 K k@uqR*ʘ1XCiN}M~6vEolkJ6}Tjed Je6r[SA/ע=s9oT@Kv`'k ,`0HrԱLp>E.Z]("q$Fq.GN;DA%+}aj͆*d쉅CŠvXHIsŁHsa]wC>ٻAOONO_8bj{X#*D%֘lZ1&?t*W;CbMJgZsPCu|RBP*l>%[ط&ÞpѺֵZ g;bQ2Ԝ< Vr]*tȀڑ *h/} YIts|"(ȁehΤOF{Ѡ/:Z ZE9/5ll(1!K>+Ķ ;9KD^.45fB$K"Z͆#_ M+{jH^PGr;^\I1>n_cù."H2jˤ4xQ\BN`l y hV %۲atpNF޻(X3!z!$+nX6]0Dl  SA¿(8֑`6̿&_zֱ 9z7-=P+yx_u{rWh\F'<3(0gԦvYy(:'hK=G]|Wv'٨>sιky6:XG!*cm)Ð|tM Re (fۜaSMLbtgjYe׹7\@ )Ġi c4XB0`]qjR& xTyıDO?pEֈ;^cVt=E,ӵ+ov-buF,yn ݅2Pw86iv{y;(}.xGYZ"fR\l(J'*̑cQ&Q\|*2xIK2e-*~Q2%Hcq&Ҧ|pu>V١ @:4GtSI=ô]#I\vÙIe=O~+ͨ-Fq:靤9urqѐס5P;45uo;󦁕Kƚ˒M%uEzAd%=JPP7K<]هyx jQz=kflz@O{KO~v` gVtL^[c0l z%Luk4! i49 %hy0&ػ0W7O%6UJ<]pf֢)!gcf\dlWs|$SqK}ncc-8++)6&]ኝZFFY?}UJ7i+Qb maL'2@)lo$*ΩHЅbs |牭7¤2lNԂ;ǧZGD-Z0F-H:VêE۬$΀翐.N' mCۓQ昌M e4 AJYkM4ƂDj6_tLThBGuDGp潕ѯwwTJR&ˣQI2_#jGIsin7Dv< %ZD\)aV$3I-[ˑ9Pl{4֋*PdAaab'x&&FUC:=Ac=k6׫?Ip=y>IŘj%Re ` dTE[ȫW~R*5k@N?/)YK$ED D.Q!$ưCcdk2Zt&?鹜M/ei׼rj=l@c3:"ZJ1Z1GkGo&ϟ7~eowԔMl(l#K/8A*Q{P(DI燛P*IEDxlxtQ39s\%J $Is)):դ,ʕ"A:&hWQ9l&!jjk ^.x^nMu/`VoHT:ʚ0Mldm:rH>oڀrXݡV Jg1]ɇa9? 
"+mXe,yC}ha:~g#XGQCCJV߷zC")JtLöfUUxj!é.©fLqWhAXUxP s'82%DEYaZoֶ֝ ׊^dbp,Ăe@sutG$ oE/x[^2ğPNgPia\ݦ@-p<]8t"=e- wCP)"tap@x<8H@@ Q<9n )%nɍ㎦tGb8VOfl dZ W"/~11 Se `' &].ZSN(]?'paF0!V6{ U"?jW nt^Ttz2MjqQ޿.{ɗPU݄{~T8 x쨾P`WHS5gмpKSë{/=m2[r)ڰ:keH~IbS ȭ?;s9ƟJutN0׼]Wy=QT@A7 7FF4ݚ𛢛y\t7viQ[%0M3iWiO/]`8NVTn OHLD9,(ܢ<:靳ێTndoUn۫wrC'Dm˾r3AUD)G$PՁk :"ʢת85G;#QtRNq,z̝vX-"rotra[0$> ^vԶA/Ql'"GW͍`'Y'0BL}L?%VES(1LxU!ƫ(k/z0I9j Ug!BjiRVhP`"*6^+Z ЙX]-׌A4)=j 4̂> U)P6j<QGEdQ0A[ `Z?"D4jky8>G\Ij̓q|u(l\LS+$ۏth?_+=SHdKlWu$"!\q @@ʕXI@w@JRRGT̚HM$A!<`Q@ GSl-5j`!b(G[],?sp=,:i:yP_ WP$kFbj9F+D[)bg9m+”%K0"5S$1`3!Q"!@#K9 Q Ƶ-E֝xzc>AuDj;bN1)cx%zR\) ~EU7mXAe:3FrAv}ٕc*ŹfmqPm'$5&gũ >Ȥ)JTT-CJ!c9;v$6|Lv&]*I-Gq]Rmkb{Nv6Zʄ2a^!X /yqc%bD|sQ(˕5yFC[tGPϬp^Z,ؑ䀒8(q$ڣN40 A@J@NXG"\JlB Oc*5(K)µkYΆ2ÔSxe&X/ *KF(y5#O@aE@uRA(vu(:yTXPBZX.@Ԧ窻&*dq~vQ(Fo|ɊpSi/m60#I}':9<ʱ\H,N~mQ)h$І1p JU;laLY] r+}D 3e,$xiC5Im|->ꄉF!ѫ-cMy C!]vtsү|b2ߌL4>+({aD>ᇩ?3^ E8T9?xѝ~ˎ>eGͳ9w^xu ,\}.{eqUG\U-?g>B+S3ڋ@_ w)eg'M-S\2k>>>^ɫywClS~脢2)].2tZ(MQaQzkTe'?X|c^V~fܟnͨSarKDӀ'ihLgUZ#>*@Pe ޵ީArVkv2$&!+$_jk~@ "ˉVh7]T|8EJ6NnB%MY.Ujfb!bޚ1˅nJ_. Y=Yy1U&ml zGE>i ZkJk䱤N,]^ v N3FJXn;0=;'*t%O|ԽSS?,cT$U&Vt0\- ٚMbQj#TnK,[6LrwUK_oeѿ)V(Sߗ*>^4 4k 5Z єHLJq ¢/}_z!rFg `hn)1?@@xC[9ma䗵M;NAA"z ^0]XB5xڳ(5E Vb=ۍFPq:h,vCRkeɎ6GV^vm"YM:׬M2OT;@4 UEDj/yht|Dռ<;eOYSu5};lpՄ亾q|%*x a9jaTpFgɣC0&O4)eT.a▱eWiiq7.>EL`&_H\S:l3M0&Vv+,&,r~X6yd<y,![nD:F * I2ì+0:KYM- 56ywu2/|Ws*0`>#.]bw:Ҏv}VFk_nf@K:O]mv7!,nOMD~6}۫dd]^Ekҟ⪧6-wY Y+&aG>B($7Vϳ218C1WkLyOgroc(Q^y[R&Pʆޤ &+ʒZqZ&v8]H$kG$P*gebO/.Z}h_ʕ贶+sMr'YG叇Ƌiv&leViFV]ۮjQ唍 |^…2EB;AsUjEЋRy6ErJVzS51HqsZ[K*Uε34IfԽs:EMZ 6?9 ,nmYiuL6omҤ&DcҀ{m ,gBuI 4cv[\ot.4B%$k0DKDvb"Չ&hB*=6l-LBE+5T:JJU0#&cw!Ut,PO;"J !}t@! 
xSIx׮ΏSi7YRX'=u@yF9Yr66EHԜI$"hj*uoDU9~bJc9eNJ} yc*ÜDi;kQJ`'ԴIE9a##+$_+:*GαFc1zּXt):Q} F.\>E=uk,R`䦤1EbkճRȮB &Qw_KͲ#oGDf#_f Fi ָh)4"@^XTl`t[ x Ρ]K68U& gA5С`yuUr%, Ƞ-.Zn@k1ѕJAZʁAΛ:2]:Xi65pgBQ"!\`#);9(x3g(Ez@ߑPI(]SQzX]Tj:9 힕uElDzFy}ẀB¿&伖H LRD ࠄjNchV.$1۽P {xWP>|p.4iȠ x V^"n"磊1!D("_ .ìrv}O6.,}tGVz*&{P\!FRUA axLECNZj,J \8gP4D.(2ӪAU &deZ`(icGyO1"A}Y%P Vr<o 7tLt#+PZV1hFyeD;H6VEh´ R{ЋY /ѡB-c`bYwQ׀R"Pc*2;ZlOvYI£j4ZIھ:ݢA(5.-MU2u42YJX6 ئQ}oAK0ACj)h\6֣s[?yۼu˨+àb|bF<?el۝TN Q`I Qi*jJ Xi+G%2b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V}J ~Z>'%f@W\J /Y =*@d%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J VUE"I =%̍(V+"M0{Pz/d7ºWGZ~pNk>8br)_ߟp"fv$!]&m?^2I8*CM]C9seBJ@+|-KK-䵈ik~8|wt~ߖʥsd o/ VG --xMbq :X|,NiCX44i,MciKX44i,MciKX44i,MciKX44i,MciKX44i,MciKX44i߯4k}NT?i!UdmT.M|HyH2@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X JG z?dë~JKM~7Z\?(gOn4W #D?޿"pbESRl_>w⁈YH4츭Β]4,\6kxS߹7j_԰zޜ"prV v5|H6j7 ,m4Csyf8?hYSpL8&Q$&MLԨ񆮈 p/[F랥'Q>v1/}U 2?g՞gu',\55-5C Anv'-c4|o4[tlAf@% †vjn)Nka e0`\N=~(3Fve-L^~m}܂nK ӽ~K@zp֝e64o9Ɔ9i9k'ĆF3^~.(*->>|"tӂ6_5:u/}E%iAuex{rwnki4|D-gO Չrˏj{'{rzms^Jwx\\^ǿ{>ĻrjJߏN.n)'?A:o>}p/Jct\[oG<UA!SjBrJ2 S޽):SD]"u6ϠIFi)TbWPxr=#-(xOkQ[I S D9.>0;{!I) PZ9rƉG`+Sb^+F>穌fF 6Nkviz|{ kOnWAx8tt2>)U:I-ʗ^_>J:1>O%2 #bJ|3TZDr%ڊܮ{:@ KY5F}JrYDHklO<[Boŝۍ#73x Hn1jx{cHuu:P/S\zGh"+д MZyCURL 쎦'71A q_#㍑"BΏ2ڃL'{s;7ʭHg|[{|Zq-wۀ [*e|J@R]FKVТ֮>} yvۘΜ$;%yr=u&|_qR?+52j5H;A;KobgskOnqaI/ixs~`lo?Y B{ml_6-m xٶ4I?ܦ(-Gf;da0HISql%ƌ{k.-|"9]dXx' QuP `Eb:(Ȑ)HJ\8uӞ{QrT&JFebQ9񢬐"R6 7Y[{J^92$ ʭڙh@@ POة%*S@&iX9C׳oxt}Z޽i>z\ݔ))Z)'5J TWei"oחB 9 mf81Mށ޾geOtId͹,rrA#UHiӚ礰[;;9r~9d#?W2\6d Ό(c4XK#J,V0n0} ,rj@6|x`p+''㏗~z-QvkbO+9|'x_bKOz6k42%xh#3hPM7dbh$g0[2dov'ofZ!h{FkR Fk*w;Rbҭ*0uQ-R^_mp>s ? h [vιVƜ &rww{Gu|]TyQC32l9~DYly.rŲ#.ȃ*%6og}1nfgZ]Ȝ6#1*:ǒ R)rQ RHƹ%hޤytx%Yo#$~<6/Q& Z_q]hfFX0/_ {X,eǶ](n<'4-JPUO$ n_ٲzfS|q:BZ~&eAﮫCDJM*!>8rϖTMl坏$X8tv4HTwtsj,7wKn)[U>1gy.]HV9Y;bovN:ug]eNe] 2뭅LNH_r[A? 
%J0N?r}Fy}Ugl/ՏDm˕,ޒ=BKĎGrIܛ #uYM}mS30#WlO_ PkR*1T`j3xbt@ߘs.'&ĶSW0򺮇)`|a6d8 zmխhnnrm 7g*J,Mu^v@ʛN&OW:9J=}̇%6>0G$:`wI~9.l4H^dI Շ=Kamc@NNA'X:=ѵ穩{٪)e(꒧#:+'u@}<p)Ԫez.v!Ę981Mm :x7bwCmF{leSBJ9,e6}Ǭ2"gqve-([o8C,SB(zi|yl63PWOMӑs)SC~3p+ )B$/$+LgJ^G9֙ILyyPH[r;D_$Wތ Ui4Ϋ}⶗|5DEbkY-~'jn?t{un_ B=DA$'JEY(3>OI[nԏєTݞZ-qV_6/cb 7Zٺo{4m1kz\IZӎD5SK [GE;Pt+vE"*9޵[tVBQ_xXh(p ѽWu Y(5e״0V^+( U*[BŇ8bK Qx(XJ!#'?}ӣyE\i˙$GNwM)p]YI>:L )^uKr!0Ԫ8j ffE8 ~UE-*!d"SMBhs\zX__``:Nqjb.|,hZkL/J׿5Yj MM6Nߎghzhir8U.\(_:Am=A/LEחĘ#C2;w0" s9fc&p0/F2KNd(KxL#ᎊ7U|VȧC}VȐN+%Y00 FqO` K[=wƇ])N%t4aBd"}g3So\~w0)3v#C*[ri0.`%+GSf6H3ý+W[z,GeCt,8f*eneApF1o92bvx'0{]^psћ yQh&B LE)ى x>̈vȦ}Yo1r$sWxRM M bþ`p#V/8l0 L.‡c a^"ZwK^$䥋^:Y-δ&9/QLfUUu%y m2J,g d|Uk*=0k&tII7U_} O;saۼE].U5fS`;-wfTa3(^բF7)M):g7K.s9bOU=#Om:^yc̖~`vG=woYR;$lY, 2\r|ʸm09^>m (^xk(}fA908z6!tJwƚ\>L^^pB oY+?YlKEe1Sp!;6-yz?Pg2!G5yV0rp u;Lf+ ՗JF3Iexr>+v>*i%bG6MgJi.RggDp 5˸ Ifya:~T&6DЋȔG"SGcoC:򦅀țv#C6<~tY=DBY2a XhPKшIFb/hHP,q4~k3 HpU>—O|89'!luz7J_TyYjҊR NT9 ]¬J3\|Ak><嬞m90ZH&+U.9/Jʼn$ZHJxQײ¼*U g$/,x*Ǩ T\Ȧ^62lH7rprdM$Gn&yJ<Mrr31GvE$(Lf4.PB5Fy]R2Ҕc vCcTP]X;{3m5%Nz9S'@zVDVܺ 5ܲ2vR:Zrө!+w>!ҊcV/mgNASX8ɯǿz&>~[FzXή'Lkʑ! &Žή)GcC,〳pRr^`.p,L5"Ǿ2(gE}eYcԌϮ0@LrlEhBqf8,яe!z3^D"=J#Ǿ27֧c_F㽒~ّ+}gccy]~` ~A<pK:ߋc bY {9Y;A.c_ȧ}tof"f}V2¦#`|;6. 
?(c(r3A3>T+Y8^mF|}FEo|Vǻgaȕg7]Kɨ =r_D3!~)Lы\8c.:GW-0 8@)H?Ӌ<63'lNfnq2̿<bоp"=,@v1:Ӽ@2G9u *̼SjY\Tk…^Hf"sXbb+Vgs|&|.[ϣqkcH5&] WԐQPY} \"N'2𢬔1 2R>;^řEk(/6P{bہP&u&Pԓr|(2` VNl<9rxt tj9Mԓiõ1G^ Bé.$.k ?&g'.0BoYJ0o,:LI@lfduCcGq^ a\3=UPkTۥ*Dz`NVo 0X|d(#-;bCPOm}d>Xt(PZzr17j4XEG0@X^EG80g ,~p3i/N-Q;xgZ*nkҶ+aZrj$Mu=|N]S׽ AUC,ܭX@[Cv?* 3!쎽 OIֿݱ7A!^)Uj>?6>7yVݐU`jK@Ą[9b-$޽8_%.Ϩ]=9\-Bb{<{U` v˿GNm{M Uj߫k y z2a]CfbM/D]v*NA ;m=o|2Z+mtX:mav taiOR-%%}"Iԙ[_ ;MYZe'Hku5u[>MyZ~Vu';o?jL?W[]TyR!xj*@%pT UR:uEo(~:Gӹ_dXz޳ЦٻFv$W}kyC=`> d.dITe0L).e=4^I?x2uo롘_1NmM5D IQ2EjN+][,)G?t^WY?]/ (׉R&o՛rE2++D)X TrL eU+Ր7tYnI'0:wMXݴdo.ז^grR09"2w{.{ֿx5\Ò"Ub>,;{bJL#Y[_q'@S*ce-ײYHNRB2]I&RP,aLmN Gc&P.@F%q#qȸnIDﶍDݿ"u`qds\m\ CSD0`p*ݷ|ǖLkF Y 4e"yU!IH2Nvz #dg8 xVΉe IX)jJNv@RJʒ<${T2yCbRD{OY>P,^+a5{NfE$hlT 6A a @XdxOmPH$/O&=jp 'PSM}% ;iC et)ɳS@qqGT(,0\mhW氺[?z_)б0!$(Xa6jE 4ZSnkћ'yE!?:b,i:'yh@I\Vشwn_U皔7ӻ-wzн/Z>)iw?Bj(PĘ#Tnh 37[fM2LӖ*{ /*ho'tu[>D~N4(o YLI<*_MñA0AsܯXUZ'Rۭ_Q&RYhu7LAF$A2d`5Ǥ rÔDG `CF8d:Iz1.$0"nq*`eB켺Ny?yc\x0As{t4V4 C#dr(e&ԷU;0hMG}Ȗ4g;FN1^]Ofm)7S7(qF .9\QB3#A>SXVq,UkzR()bՌ9}h{}nzDYʂ 7zqr+/>&u_U!w0>k"QA&y/j) BUH&#cFߺ܂|(gp.1}=]DVΑ;A(QyuL7,kNWTz FQyr3Q]RV b:ke/5!h.jowEZ wX۵~X@*bdbG]!TqڃvגZ`EdaeJ*Њ 8#\oc=mau#z)X\I"E͉eEXm2CvTnww .EdY)BQI4J*pj0| فg^rM4uG'@wN&o[&gjRWаN:J! 9}ƭW#2i^="?BXeSGD5U4`cxec#.纹wE*sM/{¶6"^pRǑ v׶p%? xK9φ^ Djx fLme4^lj}"u^ѥjd9lr[ގzӢeԥEi![oBFEGPc:c40.OF(!) ݿw7K8WcP™A0}|bQs[T{vT( )G.# =ssAR<3jKR#?0O@,puƺAlF+IS(C0\&h)i zɫnP'o=LIDTt\ݖD]ڰU.IIУR2Mtmc&4yN@ l k%ߘ)P_b}.i6`ƶ3.Q_4lo/zCsf__[KF͉f=?wd@%Hk\o(8Foq~NP@;y% EE51/ѮAm4Z2X BC39F%8^2!R"<:1y#<#!v1~μƼ~?"xQދV/d bߨav~7]Sev~-Ev| yv1ed1jƌ]fM^{P移Ǡ'W\)2Ri8 bfcC3>[+,*,_n!7cNATiʹCuq <ϽMnS:YSɬcp;S$J@V7*R? 
-4!|RL,U7>RZ LC\\+NPǰSEy&ܑJ) Bwņˡ8FF ~˹]3 ˠ$l'6oͳ.mQ 4*gki m_Iՙǂc .L'O^HS^U@pʙ*FJ^aB3X!+1f4(+)s XWH/襭Y㺂üZ(?%laVe13[ jF//5Z)9s: ˟P&=-&Vb,~T$,ݥ%V,8bx{Hgpč.eWqZdRo*6M] ielS(k74%} Z.`H r3'vKC?,I5䒃;\bǷ8d|2c˦ў״ưN ή(cW0Dr&g8C`QdH(`2bئFO 0p[̖ m˲%u8gPJw`^}R*u V61sBJh҄lh5^a>6Zӕ& 5ѥUR%0ӿ)*tY XJhtTE+XylօOh6ABȤWAn5;hU=7d=iɕepw8+j(E˯EbF>1xn纾MB*qeµjr*Tv+7#of*O`:2oPm{Wf3𱲫F5H-}ani7+D !"w&H J鞢pyW{fr%,gI`v \n0ڢs*sϑF;?U;[ԅVtd ʊ+$`@  %8D$N G%e B^k6dϖF=$vg= '*V[K8Zh}ȶ!q:[=IF%4¨ƴ/m2 W K@_3 btH҉4-2;c@[a1pł9LA.^0Mꒇ,gx<|ץйKYcswz~(bT7F4R3gFYKښ,1&-~ ] t*6 wl:փ> 6~XrZ5В(Z+{7 P3_F%1#zY2R(Z=hmÞ+V,' u=_kX`(EPCΘLl\YbϿh5YFɏ9`]c\׃wIs2ҴX^n~r`Nuύ! m*";Eb0Ur͕t-롾L*/xuOd]V".Hwb(< iCI%~ӆBL4mwzEi%Cx h!Lȇl2RFTJ(D .ڸb̧5;-F ՉY/]*C {Y풯8kb>GJnP{6/4޽wװk'@czM*d ީ;y6R&}5(pR\цoNYiSL3gwɖGؚcNk/7L`Oy fxCl'SIv_(+o+cf_4SAʹJ&۬G޹YR esCaevBH瓵o,ac!t*rQyvOl9VZD1ĉMa,ccFa?(ε)<>b] oQp=;B!N5iC l VmQcE=Z/eu xzkB<*YR'X"geX3 n } )?`zjWcМ5)q<):%( `.kL67?lQBcmE4DXj턯w%O;a^G`(7 f0/W@ܣlm8 Fu4%l'&2bzV5ydDE/5 &)EAgbQ>5}#z8sl m?%U4G<$>7l7Wi†t 껣A«Je/ԃE<)Z:Dh+lY>%bD]phF xsrJi}}ƵOb1Y*Ɋ l.hc$ 7PQ$@?NpM<۴,FJ]cq '0n ޷_7'kǴPTTfI.+81#Wӕs?ؗqscj>~܏g(rnpsT-(z [a=2XHfUZ|f5CSNiI<F!ޫqXgG ݸn*k c \%7jh}551ppt ;4n m< 3aRK6|أ+N m V^a>ZuT~SBPxH`MHuHgςIv]ĩti!8 !7 ALB$ hL$ Cd8̥,BC §Ytᤦ p!)8gˈ3",IX (x"# ϩ=EÆFe k{޻=9&^:JDD1P4@`x'Hwx:hL q2 ۤKq(qNMu"ϳų=N%4z1Т!止oTT`CUBs"uY&n0$pV?0RV ",h[jvhQHax\`GC+YB˸Si^+8_4& U&Aԥt]1ϭJ8ec-ˆZey!؅m5u[V8 M (ixL$ ,Id$*j{UA.଑p9'~a6]Z|Y촡k#ntƁLX""&w P(AD$ o::-*W%D_VB\g0̐dbjf2ʈRB6SsV@lݝ}g.泥BwEEL?U1%4KyxmGIS`Nnߣy zC8 2l `O.m9#vVyDg}&BեVoE`uN")? c+WBO6JfN bSk]CT؟1RI;8#Tm9a^͟ d+)yZY茈ԸC/C&a plkRa )[A\Η+,i;/֬.[_.Ų;m4|бÂD`?Ҟ3`(|JPK7Xj\q<7pv >K&#>Z>;̗QjH]eJ!YXVic8W("Z( AHDL;mGmܲRR:Eop緶 tlG+ 7&Kʬ/ VQMu#a "PBqPwxq1΅% mLd%4/fYuHGЌ'9%AǦgLWH e=( 9yȣbFDytŁ:ySH^a ڗlZ8WK(>O_A0&%bfh>Kʚ;~/w?iiq?KV@(6GVs|wCY9Ȱ&3Z.OdǛPSתXLw՟ݒ^h.I8iD;f -dI3BqyRg|Ն(~IyV>>,Y6l|9ξ8Z_~C-Vr`|*l`ϒs W4yNe8M6GA/wi4d<g? 
e%1ZWJ⚮?`jpģ*4%SJTh|)c=mTQZ(YvZS 밠otAyo)vPY5j[טyVŪɐVmۉWaYZ&`'":I߮_!R,2| cC C*[`F-Q&FP!xa]VAA4rBpAmy<_&=eu5snCc(ϣl<_k0mx_xC$A9e[qZ~ߍX?ek/Gy!?o!`9l9Ώ`OFS0C s̍?h)7(ͪ%e'  4%O<:<-=(r \=5{ |S10bmMhu?ZjC+V.COJ#5ԭeoro篥&~\KUO?. S H!i[oNo#A*bs'k,"k`+B6bО`_ Ӌ!{J B-x% IޏoNͿqSNZȕHvov6>6yJkm,71!lnW[,ے },Q7Gɣ<$iwɯ,V)\+cIӚ P, "l|F1rJ!Io &vnw@=xw˨aTkkDPȑ!xױjE`GfG}EI}k3mQڙ ;|$w?V.@fj+5Gy?]>1;rɏMʯqGߊ,eΌ_eWhT=j?֖˩˒湸͂U"ͯNPLEGӋ_bMa2ճ#rԺaֽ*wݻr{~|GY2ocoPYd6ּ26[FLTo!v|8{rzCZ(mWg 5Km^V1wm1NO04jei%B[TA)>zASܗg?W۱g.X{R6XÁ}n_A.p-bZԏNc5wף׃Pi# "hcE3Dhzl=>unSGHº¡+E1\6/:O}C m @$?CIӹ@;yhi+L|:l:~s kvwW$Ώ=6 TacB*f3a#;%m_ΗK?_KF09z>L+-p\ .v<ކ!241e/o?K+"^i$!.,EZmQ̃J1kEp%}V×rzjvk$B(Gn`&pqg)8% nuTs/<84X 3[s`i٥p2|)Щ.W@F=hT 59J0RA İEf \FhM6w5Ȏ3(XL}9֬޻K`c#{̘z Vu,A<1'Z"*Xel6X GSw7\Ok+X%wGJ\䘧GMb1rZ eLD'gi0R&mwZu88^&S𘖯D0{퐾iZTCv+@.h:7-Y#O 6CKPȐh\H&4 ,^;@xt+Sh k cʤ໓ yu2ioKQ뛣/WA$/g"yi̍雭 ZK5ƈ#qM)ʗBΤC;hݓgI߄tJw+]ApǼ~‡s&Ӿ'+{OswfjbJ#'[POB2gx98` AӸMdP_2 7]gʇ]V|fO~}Pr&7 !0w {7 X 4i+$DpJffZe6cp*+ 3ޏ&wk< " c "8ZF7Ds[ٷ'A4$OqkbٕaG`Z#5BiQkГY?A@TcG譑AH:JHp% a2ġ, J^ͼߜzvay@S 7LU{/m>$qbba~T߱6Ge鶟#gƦoM[)tXt2#  M8>| l eO" zmV.j{1J Dܯ/oüޭF/?}E0\iȉ@⠣SV??{GN7?٩ؙe ]0y4,RsXk(9ʄܣS@.HYѵl4@V^hX`g-Fg-keQ?r!.)%+1&0Qj& Gaj5!*qp9b:VG\g xgfq8?`q6i0Ln?uE ,]`Q.-W4(IV!)3,JZƢyzkg)t' ci8_իՐ.li]֮zPOE~MrNh* ! 
B_ReR (#]K>Zn0[m3\,aVZ;Tj1f^V>{R7or"ơņ~CM@h#c[\4ؐsXί-֟crU)t2}]k-j  ŢQN7飝䷾yjkm\K[8wjuPq8gB8rd#6g$[1U.fŌW,f,GRp$4b"X`4Yjv;E A\"V&JMz$U[BFR%~8?S1럟kY6kPť8*G^ȫ\yUG|(sJj` &$(^qaY^r멓T%Ɣ-V|ݖ| AOEK]'lOa~vIUk0wg4qxd2reslViٗ'eע>NoWW^G\t86LzR=6UDv8!MۑmP_En\/P..hPM{xA)6d `Wm8\]-4z|jP-+@c &i%M0xd<p?dw PB@fX~l~Y~pz~K--7Sx':Gq{G;ϻ??F 5q_[,~l[lV-6"RnAۂ6M^Ȣ"_{;aIBH$LҕP Z~KKZ/ >H٪Ϯ&gኖ;s[@b}[ΓfWx`=RnE`" !z KVmb<?uc(:jqĨS'Kx *}>ń8kuZFi..gK N)2hY@E{=?;IV5)䊔ғpNoqnVz(<^Ȅ, B[[~r7vxqyR\nknwoĒyJ8Cs0&']I>e4Pۣ*dvQ)w,.PFw.ݮ4X,wv#B?Nf ZxT8dANsdH1t.+c&ƋBPQ蛾MV;%^jrv7& [JJk#< NFP X BvH6i4~X֎֐mSӰs 4vy[D̽׷a6;Gh@/t?4dp=^]Zkk|ҕϞ'*Uk< X}֙%+͚N8zTT#:qNF^UP~<E7zh~ƙHŏ#}]J Br~d?iʕtÇb9s7ٱGI+ENmxn:S@yQd``ܴuu=SZtv}-:'up#&|W Uk>A#*Fo_g_c]}@LfWo#p19BAʺ#N:Nq}ko&],?.wc;Á4oNzIxg?迸 uX*|tO%C"lqH@[]YsG+xg*Y똈 Cc^fFQih97 & ! *۲]UYQhO0׸sJ0Bx%Nj&l4( ҵ̽Ga2G4!AYf6!=Z_(ezpxx#O C{CaݲqbKs!2jb[;#ADDp uDUyxyxקbǩ@)EqZxWSSņWr[{بjb M̿q L'BǰX8ct@2Xf/zM@҇4f#+kl [K@N~_A6b& ~51[磰|5"9F=:nŪGh<]3htq˽w.$v>HsyO}B~xv^ ,u:ӂHLtJتA^}ឿIJ&eeH+]~=X+"gУ ɐ*LjjN%iG;hg?L0لڳїwh;j\So|K?g |иf7ւ!z<ݳ`:*4.AE TNqQq|lyq搛ɻiLnx׀ogi jzr tMx+Xnye =+~N]W`35Be[!GN&]*Yv_xad/K].t\8DS {k-ij'$:J+m@dQ(YLZ'"vTQ[CY m]G5Y!Ҫ'*I.Փ".˸X_K*mhF'ئѡSŌ%u<`[ +5Yy5YZY= @I78Xto.u݃ ˉ.BTRݛٯb{P}O1 ,ͳj/>dcMef65Wh q>>qW""rN %Qt|qZ4.'*,1K4u϶- 'A>ᡘۙuX76RN*(D_YF}吔Ʋqc5Ny,zÃ3Su $ٓ eRi4߁I:mR^[ m\ cd Wt?#cz+0+m)2ԯ{Mjwq>_>$ccը|zY?m2f=Z6XS<[btV6t,hB% kبvp{c 4ӡc29׉A)4FA\ ?7RTÏih}t/+ k4M2BWqCt i_`e8RW9/+LBޒeѡ@&tqE>aֱr<;=|aSQf1L$:YIs Tc5ו+9ه6 QzVr;4gϭ 6=| 1c=dxAT{Njm~/c>v H2ebQ->MCd5dKW$aC]yJѕ|-% 7|eTe7v$yk7RGUo)I}TP|WzO/K>t-qÛq#5bT^c\|3C+҄eP }O2Y[wr^Lx}W¹J,}VKv#KAPhJe?P&4b=@T*;:fпS:<) Bwmts!H#CZ=MBgg,sL02꽞^‰w*E[5c`DdcX`P _Ӝt4? 
̬I/sT|΃[`@gO[9A޵Tpe^rI;Ƭ̏lwNAQ:~={ӮYtoIi:X D%mdO2 y4k80 &ʠ~%jg)Cе]vu1/?;͔IT_'6E&A!xi9ߗ:Lʠ¡+5йrȹ3Ǖi=R ^ӷ*][4(ӫg'5,x6^e&)e\6(sm-sK1Rf_G4M_|h\}tټ0\^72N0e+'Ɲ$mH3=a/q蕰9t<h0sܭ4F~Dcd3A_ â@g$w'}P'T3ӧ{T.66L0 Pm""AO0VQi41*\ #ںB:8hpJʤ£*1hl\VKiJ4=HCp@9uAS"ud0 A[er M^XnGwrx0)NZl}E848=Zq ]-4MpBcIS'D=|Oқ7Y%Kh4n% m<?A/8C=)!: c}J `ITA 8}ZA@U&7KABJKP Gbv'˃x>Wj7hFkKlB3&JDq(+BTL㿁HXnp)14U.pt~cMunٷ]>wsČ"Y4Ku{^x:_C,6>(t@a(S"Tڣ Z<1ZF'@Z]ՁC7'w($С T8k% +Vֺe眒P\($ބOq[6TMȧf_ԒD&1rB 3:P 1DT|@pSG󧢸ØFM:LQPy/6޹AqVƮqFK sQOD ۀ[d& Fmdw`"vʑCOA;;kpRƲ}ngږM6(kg UܛѸp&Cq;'%ɗnDV?O>Y"&p _+ g@ް$gP 4WKb Mfi#r9,~o_T(ʅbupMZxFU8FyF@ bd!kPq94- GT(|ʆuV@SZrRo׏rЌ=)O\uuLwO>zoPNc;ջ,ɫܸorp᷉&bgyy?{׶Fd>0 wm!m)Y_'i](LH,Uh˪UqNd\{M\+V֯Ҹ yz}u\;XbOUSu,=Eh+i~X9,|(C2[s?tўAsr{SVdم'JX>Ƥ*)䗥Eh `Ieli>$t&T )"܁o՘f>UIջ[??M$cvܖo<;mh] 4U'Ú;kprxvq>992Ҏg~ͅ_<zayzwՑo I]ŗ_>~`ǽ;p_ ޶x^\<)义c6Г?M:ږ_.KXHi. חpiR^ rSΕOm4NksM:*S,=++s?ZW*WS#G2ue0M?Wp>1LvYiz-#)z 5TԩT^Zp:`*٦P$Y0\M)Q xHG_LQJgRި_4vِ36nf&ʻe'HGlH姉8Osu=s]OJ19TA-_m-ଔ*%4Gwu)2mGAxd/S> : `ytz{2%4S237wk˕)s'ٞm&&ҹ~}¬7{(;~ѿ#s>A?;{p.N=sn~2#|hMk^O&В#Ln:m3n9HjEV >G rE֬4 =⊥ <hy]]Ї2)m:%9c'`a٨KeYVV΍fFZ)D|-orE1r( miv"1P;HeC 'Qe)3n;w%!\ %tJ=V)8,x<)UjXּga>B᳽>VXj='O^?dhRq5y=le@LqZEꯐ2%dS9Ռ;bn>-۪25MV޴c3c= ԴveǏ؆$ woҹEO=[tn1νd&@, >{A]S^ ﰱ(vkb M@WL-r4E1J:wچ=Yk[[%B?j>e/_`"mUrXFn#s) I]KuGb5x'KYeIcIŦTq;. %2`cDl.Vy(Y0xkEl&bh* ٭CSo"aj r:sn)U-TS7VJ61 $YCzwhAbs֍xEyYVdդUn&řHR"ɦZːWj%˭D۠}"`%/|Bl2ziUӕZ?S/ r]6*SF6X!}\86wSHpǂK:`v,`pR KXyv<0񭺈o0K+'Q_fY=Cɛixx]w_ڮnħ&ҧa{1*sU{d;yE=ZrO7[@7;pt-"sN HԜ/޻InzF_Swd9pozsϓ)vSVѷ.*oٚ}#=g8~G\ @Z߾|/ ^T ,HE r+%4S\=g %,[5dKUK*B3'o?nL* qJsHJJ[|m;~Ԃi^H*% PL^5Ϧ8mxz {ql;ȳS=:gtG\wݹ!jAԾ2aT5T:waa\l6.X(CûL .3K}Vrp5Zn076 kʃ+i-5? 
[ΠeAHVV KlUs1nZ646IxGHclK`޵@^Zm$6tCd%=v۸6CR 6kȶtf-M榆0Fv}I V rR:r =Nj7vtQQa  "mpUmrXYXS!K8p^ԏ HNajm2xhY%i/<D'eorvfD3lEtNsb <6$Ցr~DI4CF˚,I?bJJdY0D):CKt>8^wyȶq,[K",0ـ:Ya UBUa$<]10@R{9HrX(;_>ueFKtG6a ,'n[JMUIws]\8m?>/ӋI}=~<_.i9$v\ d6 He Q;ZI/iW(s/dk2LACߛЩ`|)Gc=9;K k 9V.5sc 洔4vM~+&l:mF)VfЙ?ct`Ž8PAxġm2lnО e$8tġ[?qώCdk¡z֬m ;F : ϲsy;N7y1`UOTeRv@$W| :Uiia_oغK^)TCXڤ-'%%ENZ~'$kB=`.5Z4c: OyAy1FjneF͝$Wf~B:@Ìz<{.Qlv6zQjڡy텋E*<20CX#}&@v`m+,eVZS<oLcDzc^UڙŊ)%CGiOI[ q[^LX{'EbR0aװW|@RaT9eb9קe$ Ӝmј;X4Yuғ!;7Gdqt5^{^{$KVkt.H`9Mg<=kBOjՀ6RKIbm6)w-qH.h Fig0{àgn;ڒ%[m`KR*gUԶ ,TQLf0^#q;e {S4gvOpȌ7.?xGN/8 '`8_cbԪkoK1/ǂE n b 芐soAeuH.|q_P:Ck)d#HcH!a3BEK!8q^Ìfsށ D_Y(v~uuL!A]~;>l-3XSAQtej׮dr # 邅uIS))$d\HR̾ayVfCQ)hBPJ9q sd&W!w"jNg[}IX}Mв7KЦбé? p\؃*{cH1p:]tS +87fpJVժlHNQ5! Q[bJ%Y6жP4i(3oJlFiHXCdDssbu #eDaŖx>E|3aҕ?;F J# >03F16K&b&&E HT%[uVC(+Og-$Vk, 3ToPY{+x?W|aƐhz8^˛ FSJ1.1m( bNA#+{PYķm3B ֤lkHU|`/Cl/uhTfl )'aSO #?HvF3#x&/L97=̤s.$5s1lQ}oo `ȧHgP]醝7D[3*'XE,*e,Ԙtkc|i,PՒ@ؚL#wT`ӊ;wj{XZN}E>5 )/N@U{-5(Zm&͖[1 DŽlmS4T1PmN鑶k;=֯JYtܫt$vY,r_Uݱc8z-nwK?WWW^Mؿ^_]]\}kkGqWZ4I<}u/^m:Xf|RxOTMǻ)kw6:}7i1] ޽ke2)כ/W[>MYnۥ Zr1#k04|Pt}Dv.=ć--Ǐ2B?\4n# nTɝVx7AAߙԗ_~M&V#R2IZ{X'؄EI bQ4 $߯R55Y͸`5NE~D&͓"sG{G- 3GÀ~ xLXw!yfcey-2;T*UGJsi"^AʡR{UCߺd,d %3H;n~-6Eӧ3XsX{f,=*ƵNO۶nѝ VD*bScP5| C{m{8$ d=w#PĐűNŷuJY ކZb1dRsk*1,FM٘RIՕ9 mog9]ߣ(fb,W)k=lXԴE1|q2Y]j@QA1A3v:Ox ۺn$nYł'j,x8(؟c#G#feXv"aq!gZ#-cM)zݡÃxqviXW\֫,M}" U1YgŠjU| ִLֆ`J"yvltVOX|F5YϞ(@> "@āG=' ̉[TY~Ү$>VR<-Vc^L9 ΈJCj{A;J wOf=\'Hf| ;@aep*6`%=t:tJ*8Y'Q%X3Ŝ<#O15t%$QrCcaQu]>k_ꕻ_9K_A<|GXUގy9JL] JuEɁ9 T2s|1 $Bx FkGNMOn`{1_Մ-Fu5Uv;Cs;J>M0^|$H厡z>l Z ]'|3LJ4P,ش©6EgE11r\KdZs >N9Ti4?Դa ֓$vˑ`X`u҆A7vg ^Qrh@y-vzKR,G<߶} 5і7+g?tŪ<j;>k L^9kysZӻ!/Ø4wX'E1ɘ.|Aǡ)I)oS UQ^MoAyR/תFAE+t VpDF!EFm :22pۛT,g'{&>4(X:(gĎA,F%WYhL7!QDM!XЭMS5k](558A ADn*vsUG_&rf~S/%b f߀c'#l '%WjPіZ!;ABe?\83fxo ["Ħ EЇ{]x o2MO&yqoFx0Ow{Y}{˃_X_NQ7?0P7;Ie׿;X7+#ݹ+'߄ԉkyGgO,~T;=y|?wqǖ6"fŔ'H{ qGt|$P:?#>-^yY*"j12` twdۦxֽg,>>ܛݳ'ѧxW>^W\6cb5ѥbsbMV)YM4C+!Z ޜGC`r)`:4TQ/9 ~=n Eb&c7{YM?pYyzdȹ>O1HԊx[ 
VD2Uu=S(`@/Uz"===&q$Ј{#')DƵ>-[%FA)5Ml0`̻=?h1OSGF?;Q6mv̋ѝM 4p?x lYw(90gBt4kUFMt햅~̤E9^נcR&Ӡǰx >c٘lvrI' vUU32)!P[3a/,nB9cBO[hV6QoU [An *z$1>d}گ` Z؇7#ɱԢ" AP Ύk^ڍ1S =Ӯ &mm`>Zn$92'4P.O7;b+~pa)S^jZCw1[e~^ǫr:gY<s@=gU>cWճJ'Ru!0%SʮTZrCSg3Hf7Ia1.6`Q!%]Y@\gɵ BΑ)8MoR>hקoXIB&HC82$h)9˱I  >Ֆ++*‚Hh93%Aesdlѻ^R+oֹBˏŖ,Հm. {i! .6hm)f{z*ƤIXmdRCV|[Lڑ2R,}%%!Y"l& U_޵_z.t,;4NQ IyAƞl)F`G|;r٤ߖVY(WI=7Tid1tzU-w^p%D C&*$U`6;|r.7 =4E {,Z 9ň~FU]dda2u,[xO| O bhcN?'-e S|/O7\+t.q)i{3;L.[6mJxQ\Ls4*%$mfSaK ~^VQ% /8k21?w/џ(L7cMCbC98/`U;mŸȜ4v?fh#IN>1 L_% ީ`^Kh$7|ϸQ"C?zR duM ʫ)Ȫ2Ƶ.A*(Jq*;'Vr[ٚUQZ:uX@d\[ʺ&̊RMYo R'憳d aK+k y\"dp;!|nVBdR&5>I~%F}l$:)Ҕ‹}[vjUnH4HmtG$]+H-\'\S JbΞI35w,F2$CjZ61)e R/ql"NM*FNŗ*5(8D9GaJx$U αYxnT* ԰ &T%,U c% 3\pc(DM4|$űm*1L $ DSIû)XȮA (a6U1 .6 5f+UVx衠ܑҝ XϲXekh }u9j*ֽ~`9#xq1;?UoCm\w{I cwGw,׭4K&v뒭ٿ_CC/֟HG%2|t)`;K"(K_%9M3l !vdZ6K[p-PA`5JMua"Z ⴝ\"Ekm) 5ꡤVQQ'djY_I S#0 ROς#XgX͘gc"d28ҧ!bc-Vcs*YgX0>'- [9%VPL@A,I2e%MR!{[}IW3Mg̢U|)f!D uC%#tbs; :qd KQl ,Ј9N) hBMoI}>>e %씨PbS@ַm&Y/'=m󶍯7+S7wmSz -@5)UϜ[jh'} ƚqIdd1A1sp4"Xڰ3pI" <6|.o Wdd^s}bF8)U>^uIZM~t0/o/O)}BMTPX-]C'ӏAY=q*1`\h8IyGݻ{ ݖnR7c_e?\~}ysk+8!QniU1ރD SD^# Rg-V7A$nq$h Vy+qWud2xiQpj;qJvp`é3G[ E=.cٽDD3嬺Gp1}K}z'Pf]v^#:jgؘu+ɑ|H9Z2- ͙b;^n,}aX*VȹEΠgxo:AW鬱x5kKXc ov1 k{)$w z'>VݣmT:á1FyC1Fԙ`QVcsv3[qXv`_Ԏ8qށ=eG z^%FT}\{ _"1'zq ;oQo$92'H?c!(a~H:j:t[͘gNFl #`ʡẅ|:Jrw^b稓81fI=DYFK.#UO㥹`^ KlICS<E{Oo;Ey>Q3~Vg$yeܝ31 6/OmNyBko&8" |zuQehA:짵m,eU7Y_?}%^kSetzUbtEה{B=.>S߮fl`\va[R$lq$ SyeoZOԞ.ltZ)q5ᗔtAi]bS(F%];w8Ė*vn[ v,YuV} [9TvP#nN xׯX'@'%A;;Y#>xv ;iǖ5|?~}s5.>m߶`Cŗ!K,]oFW}Jgw6@P(}>t:-,ؔIYHQT_[HvfmgcQ'Z i幧Se׹Ks l^{ğQ\;ÐQ+rW:Ѷ`0TCRr!3!Qr24I1D2}!.J8Ssمmؗ8͐VlL!%R ]11Ѳx}|k:CثY]i[d)M\vvHa>|O4U¥w_$¾_l2ذ'Npcxv5&bc"|"?C,/ ư_^݆ޅfq2J_{bZ2gi" J-q*M:d}4{a}U&O ?w&qrus4P$"5+ʺCuRYM5f4=:P*I8*DxK,VP$5%/_Nr!ʹFG[9dA%MEZd)32 2~mA+N6GB[mrzH%c*yw NaZH ﴤ8X?purN( D}U֜E,9yy+}Y֖0M lʓ4 *L,'`x9 p©Dˌ 6J@ :j857cw>ZZ1D`JJѱBU"J*4p31SY!qgb@wWyMsWjzƠ{{0A捀Zo.gg]IF?"ERLP Ej ʼn1;Ӽh~xܗ_?|@Ku~ʢ5*8LQRr*,Ih]8/txa?#wlzeh;(ljR.U] mԕ)aDL:$p^}%14 4w40!̌A褓 4r+& vW=nx8BMYL2l9VH =ȥQQnRK O?Noʣ2ҮpK5yFɠ?t tRf7oP^.,r 
ɗ3F@n[}ͅ*nOӗx4̕| ~zN~ܙwZ_>eN˪#Kkc׭ά|SFbp|gPPA)؁tPfLժ4S:0DtY &TW  =XzfWѴsaDL;biSKMh=v 1-u_zʜQxBhH@i`T*gi?ǻMfinEYց8K{3n4I;zoNũ&d3=J,ovyWXwKu{4nyc殢ۋ2-<$kE>;=W͸[Y+p뛯7SO0UT"W W]R*.b\8k$u]M%۪Q[WٓzG;g2zf%7⧋.?htv|';8sڳ[ÞcC|5T?;I3| aĬ!}JψriJ.?ق^u$ºY))N*[DL󵂑uM(yE V3|"qƏoڍ9:3gPV tg> 4*΄w#B~H )V@HMVi0*TCso3a4~e9!~௓_H$`ϐ3&5Q_ȉԽNZ%2iΥ2jSiH!πrp,bb`C/(M41&߁wr!ᔈ+f="v>AD,~|hϝxCazM-UMT2]%9L-[Aޫ_yQv4l¶k䮡U d W[ ;N/q힖 R˹ T<#Rh_^)cåݝNPʬlg(Wz|cԵۆw<1P a7*1 uQٌiT %pph8\]>Gndž7akqB_,A^x|e9ɓTQɼT4_-<j#n;@KAİeP3s=$áסtLr9J+Hxʄȹu\ge7G<CC~7[T%5p3u~;[1iQMTg2Ie?Vc5dDCEL㾡Z8k'Dܤ Iul-7۫iQ{;W9\dz):=.OkFR JP00-2}Y)%JAIβ?aލL-˲ШuZx V1Yo)9:qdApRw>Sf,mMxAPL3 yxZyl䷴7Ydp reƄ 2'Z Y@;)qMû, l"A0G.dep56sA`Sި]޺L^O'{sR܁3ST)nGC6'd7L+P`]N(Jaq:%I9閯(G $PQbT:52%ZS\_/hI8ud0QSS@ieyK÷.N,0&Af$: 1!Mh8n> Z7Pөq% `Q#9zhOGh6"*p^j~yGRF ֗Rbxv9ͧS/.}Odکv;F4lN(VnD>y7aq_^ݞٯ[??ӣwEbs__O.ӐIzbZ.e7i' ZVI.ʹٻ6$Us\FwU 0Cֻ{@bo!RHZ=8W=˚ )b`D9鮪u= 15*9_Ά)PXb L!Y떳ևѽl!ŇC16_. Ƴ#BXU-r*-=GFl񂥘j=ՉAZ1򳛳â:13dY$H!+FcIˁ0OWrl>`oFxq$hvFKhZqqe(H4]T1ifNaCsaa?;קÚdiPDT;˵qY{K8WULݎo*V53}UחExv2i˞J#ͫ{vtt ]}\EEޅtT77ӯ~N2iOuuMۮ({ZS ׮n }/>ir?Flȁl$`Iű]<1ayhpX@g@U)THс}!]+^L砫O;Eh$-#{ ј\b od|Uhpp?Pm +|rsLUx>Q;wOPfcfn{{KH1{_qل|b*t|Ѹ$^ޏc3?;soEy+Z*% hg /_ b ԔT8dRr\d r^v{ot9|_'l2_mݸ{Z]:xw3ow3Z/=WW*]ߎ_I-du!hN~2vaT $EE⩠@74 KL]=~&Hx `'Cw/wW_,3 y)52'y͕GeYU`&R"Eo!;d}J*p wئ^Nv7k/|r=sBaj9p9×Bٷ/O|^tؓ>՚:~<,aE(鲖UוU}_rѤ}O+aQ5H`B'(W dH\qȵDȁ[GލttjkfvXtОA5 a[[ * hFxO5byO+bbM+dKʹᵽG1AfW)W1B2rh񂵜R71'xpi{b+6x,`[1e3aDamL_e\7j盒1z|~jfv8&_[ W9?-=Ӄa2KXKѽhn`ע_*"nf{Ӆ31>yP Pq-w%(j&bќ{ԁ#MOɄדRoqr%rtquyZ}!չUjQx^|fO/UY.C,nj+U;ֿ0eY pYn>jh2 2`三19V&'"<h/MSBc QyIy\戺٤O -]!PUvŲϥӁ3 ) e(|9hƓz> VYzSYS6qZ8ܢKj`KG^dGnldA[0) i/p>QMMx VYgPrTwUHKRTby>_/n/g!; &\?u9ݲRɯR.fU/b*Dˠe\"Y?PU{YxٸLTW02'IA43K5iEQD$H$L9 TRNIgRoj9ƭQP(r2m8zRzO'7W>t;zM31LewozE=x-^MƤ_,$LF톫)v<}ʫψO;cw}.j)~,\2#HLYHJqB~+M1lLbUB1HoIO+O* uկ0cK=Xk-N 2C/Nעm:wrVpSruqA; W} ;taxYYMs&]|y,{] |hN$I,{e*Œ0\ecsoFAmpLviz?Ƙb'9)YU\VZQsy60٥\͌%6#/ɶ ).f׬~tɶ1qrϞQR'cT{Vn31 Nqf6GYt%JK.CB2\n-X_4ȸ84B#Lio{ؒv=4^m;Ɂo{I>Myk"4\5.~PiĒea-V26LgźۡR /U|K[V{WW<\Ah\-yu\n jbj'ʎӚ8K= 
给r#{ԚJ1YnQ;^Hc>u9UL6c@ O z,-ŷ Rti?rJ9ʊ`*H4V{C߷w'(zAG>+_~kZh;7_#Kiw͐V-9ک5W䦽wǴ"Hm6]IxZr$=sFh8d’Dm*]mӗhkEu_$GuV|—KVhӋM|NL1j"9iC&nFSixUBV  S/L阍4dShH-ˠ]Fa|4fb=l*ڼ Mxg#eR"HG1@\ R. zxEGRzc2{ 즪t[i}<ȇȖfLA:`CRJ4^򁌳/try$fʘE1YD/gS+Y)d*&@֭J "B 5d 犈ɑDt9bNZ':2s])oL$9!>SD PǜnCj&f)s$b@.Gk\R*&y"\J ܋tQ̠>N &F"yxU$k{O4z.Ѱ҅`cIhPrqL#t4n:.+By\*ka*`2]#Wjn@M̕cʂLϘN;yYkio{mZ iVfmrc[FtAkۉ"C/'rRI4C b<*˽Oڢg҂Jf))QY.uu] %Ug{6WG{ZҾr({c >zw$e-R]h,QKr73DV[]Va/4yS !;W'i|90_'l Y{ޱX-<#x0kU g9!oA6337}sq78mlƃO_5!dGFThvs޷3M><2 wS|2l:켗>ݪ8ceӜ+-{ٶުu1-6γ:fݽݺu6c\R+GҟжkltgXtқճ6/WKrK[WgW 2G3VUA@_/r3W} ^Gw؋퇽Eŝ v@,$y0 $T;i;ہt@z: ]"*r4N&?= HS /4U!"r!ObCzk]O @8!Lz:K }Ŏwa!IX\E6 ם٨1 ڝv&נ@qigX +b3E sABig2&0n=SWg!vhR[WV"=kUz,^ JM6^*ݣNE̕d[?JZ1ҒB<*zCCpJkNas(Gڞ :+\ܬ녂 } ]Lve7V&|lpM42Hd U/k?0κOi’Ju{V q (z_9Y97-ݔ냾vO/2%uPg:aWtBL'Tt*$43 `06ܛGIh$!1oo6,v!+{ڙX3ŏVY?-}Sߧ/˥hi_"N{3[2s5ni_g:!2lωZFu˻&Xb D-;CtDLH]^l1LtͰ+$ՖDҒpyKa(gzfj+Qwle*ǕBkWfTw)US wli9J=˩6 av6^F5U3ǦN܃(BǾ\u8 ec0W@ Ą7'_&?à<'F'IE=q%u"VӍU .e )DG["R PF5MxgX|nK2q OPyNCy^)sps}508lO4 J$x-qӷTa_$~1plPoS S1QS?1*+2VYgRXayH*)VcJbTfdn."wܳsYfaSwc5/>;LloVg/䯏g噄$Y6ܚ:FwLFE` HTT*'14G}D6DV_<Ts +'e]քviSX+&}:Au>BPHm"g wrsgs"/w*cԣQjq)v>(9 8S͹`K@kVjSBXc2JFP'܌+%;pbf:)_^Z\:>$giy‘a_LǪNYno0rgM{SK1fxCdOIy?UJmOdZ+gW3ןx?ՎeMz Kh~;`{uqsF7J^r%÷W;uUJ$.x|FX-㇋pW](>*pXtq=~vJ* 7 }w\['<ϓppg$+6F$V=JqIMgurq8 G__})mbfDB9&4W.ohN7)9N;w7K)T̵B~|ž޽R i~Rw}RI雔SZ-O1fGwT\ƴيkwusG!Zl~YKFň' fEv-'%p[Ϥ50bLvF!ZY&TvJw)OL=a?\era\'y2 2m!P㏕0}|D,1/6=_LL8H R[eq^Ovw0:`Vukq>V5VpXmӰ)<50`FEn3+6d@];Usrhm z~*9Z[E4 SO+6%ԟ$Y9oMeQTe"NAA|PGαa! tu,沥<}\1iOdY-_|Q + Zb:΢9,=3u/LԽEc[s͚od:.܉ˮs!' Azoj ll&S&6k^ h IsQ[E{2 bCOX1H)ɫz"yJF*mD6@60ӲA%fQYޙ[fFy)C7? 
AjxP{i :LmɈs9WQXZD)ip=lC1H//bl@Gn@ 3:5>L+c lm{A2|H'9E1iL\cX9 k p:mvә _(%| _X] 0N0A͈b];w&Bxi24>({zs#:4!F)=w1=-ܿr̒ʲ-1en_geEcaSohIR!>:Wᗕv P'7A  Q8LA"FU֗YYٙK"gnZbtwiHo7*{ Dx(i^d R`9'60'8EKx~mljf:v"+&#(9&:ZAa[+ CT$1{4h,yIҍSvYew@1%;FqY* $Kք@"qJdi%"H1IBG FH|yDQT*Ս SQiQR$'(pGc8S:XZ, !(e\9Ս(i<TS #0XtTafi66BmeXhg[6<Ć1=+euwASqgоoe,2mefg 0u3YΏ[dAX` [gGpe+^q ^.ǃ,Wuˬ[bY1pAҝِG6{|X(A(kZ֯Mo.y.z$[ C҅˜/}EtWo5$y{I_^}~X ;*YoE OJ)B ˵U F+JY33go+~/:*t,eLǻT+]noߦ0ybǨ]e/؛VQDtt<8oˁA84HMz$ū+X Rw'^I2Y*[-.vI.,*YxUdUM o\4xRH44;31eL*>U2}dTM'Rc8&,kVo|0d$THP5$K"$ZWWz$_RpZR5N2A*үDIɍZ=0Aϴ3('ڀ5VzFZ8p r$ 7EZxuSV E`edIJ2;+8H1m++,cNoRZf6f \Kl`PKhx`d88+S #֫`qXFvQHSD*Oe@"\e 21YH AINswQP`R&x ["A,: DË$ѯdq 6$TbWF~n}Ώlj1Aǿ@q] >W0on?ݬ"4g>Fa,z77t#|`6?FsuOnQIt3{O~9#*woL'^1Z ONZWAPhy(4\ۧe> ޔQ@-SM:yx0{耬ۛ:"e4Xf6Sn.zUj ZɃ;ZP2[E櫾p+EܚYw+,zYڑtk"wz};dRl5u웋^lw0qoݻ'Z QkB|tȏ}^IU+3oFMȮ`Ѱ/r9g윯7z>e.IYh!W,Mlȝ6@&p*gBBhl}#$YKwu㵵%d=|>R5ZbmN,Iߜ0CfVbk9ۛ]َdQ׈kvѭlΞ\M81Ǻ"\fXq@poΦPnCB"QӤ^QxmJKHh&$4uz!Ԃ@AAL["5׌i_]-O@wVEcFIkm"WD<8?&aΚ #?Xg~42Wu-׻q&"_RQ_߯g;/;q2 1db##N(X8FZK9p 'ܯg?n&VtwcUH7fu*%٠0gSO5ˀq3{5ۑ'veFIVC\*(^!.ӕ5٩VVhjjj)tJZMކYUʪMX%!"1"UFRhC DƥtK/8DI@/ RϣQkiŢRŀY#ew"I9d%)2`BBKƵEk%0Vp#0H2UD@1$)74(HYmR<im&%B"%/ q@  Nsq E@J"t';2ҎdH4੶qc%׊,h%<8`htI.:ڑb޹,-_!/k,Hn<О7Mh*0;@3+Id 8F䂤Fs-Ei8p"v\G*g^T hexLƤy΃39RE4Ü#GPPsD ? #`'R]U{QW &L&s s88,S)#UJbvQiRTFXc]%ij뜂ۋ^ouj9B}햣AkC|~XlP/"8SD{T Z٘| da]oiomT:5/4Z$`UBJ8ũ9e 0:ũ٣I଑Y a`/' J=L $(+yi1(y]rQ SEñ,+c.Z=jJ;M9!Qt!6:LaSlt،.4UC+D0pZULQd1Nӽ,^ןchoU:QYSEg[Pv:EDL/p݊9|$K@TLpi%h zڀGՑjCMxuDUB]j#%)^]┠6۫*$Mцv(~;`qpc#t~4pmWxz7M8֤kPu6.aߨƷ_ r(g2sH{iv@ JחO*B{ 9e;zFs:R59D%qzS/1]oGW} pCaXpK`~ك[^!)R3E*ađ8Uu=&S"?3'ҕHXx1˶@b2|Qd#rkG/ . 
NNzx9,ԔNxKJh5P$j.1/쉝G}"3j٘O#cJͣ]cNMV++>FC79P2\uՁg3=xRJo7XTC|![.ڔh<[:_,׊+P~).?gԿse68;kڢPIÊ U?9(ÂQΥ!< [̏ <ϙ63"C(nԷv轢D?ZPڠB̄UC`CYuZ*b Lc;!rEM$x -r2Q)?9W/,VZ_ÕW"!B+M?jye+(բM I%(!WV5SV%mnsO"$,=8Zlw= ?=!Є ;ᗝ֕IOBI>Z==93+ QbtFSC g*Abt۷V9RmyUf1`1'gqX+Y*cF`T%M[b5#``T"6 5|Ҥ1s=jl$3ɠ@B/=RQS0DBaDCK#/!lD9e fL~QNQbb0VPP0#4!4d<ғ0eyrPI8sz#^j$uZc +ԓF+K%,k6I%V6F8 Cb 8ȹr 2neb%Oz-@8=<Ú=q?y}*9Qծ6mL wj;}ŃYB4yR z@ 3,Fbf~i`Ny䙮w+'(S~Y>?{bq&i8 Vnvevz:x(V_O8ƨMms*ɢV#(h_FtB-쪝^ 껢3 6wЍQG6!Z V]plð 㝛bh/?cрՑaF\c vNOQ=HVRԓ Õ*b\^,pILkG;҆`-%.G`z:<]}lYT('ÿ8(JZr5E ɐ_?8"%R?i}+jpc999\fʾ^A_IkIѧ'}4dEdÔ aQ8glZ ) v* *0R '*..9XZRzw7a,&#dYuw0%/7~:FfHh\ku6N3m<극k U8<5k6**4=[\]ZcQ da~~PQUԇ>Uz;- bF9d<+,/%ub "ֈBS=Ĺ8[ uXY>{_1{Wo ݲz򓯽Og; _l/R(&K4,}a -rՆ3*ed(bu+#ʧWzϴn@d gF3q0?ЉR)QZll,cGҬChMʅx7-ngS4w0`g im+2۠6PpPT3a!k*17]t e֕^mP&bEgX"Ru!W_nS>OgH訄fS!\ÔU#o~WN=HЊp_6ȌB9eojtZ;>9:9{j6H IU(8.I$ؗiSh*Kܠ[Q WcJJ,}(XcP ^``f t竹TMHx6l50Vyg+C v;hs(@(L4asBiByg[cHm}E;hfg֣N@^}oŠI&:z/|qua4RKEJpDKk r# r)E,Q،O҉cxuugjL}[X2RAZ2"Z`KF"%4cNc|%؇,jDt ,CdƠjsX X.~6;p62F&Fu,1Fl4ȣFZ|饃@~aݗ ^01JɈN,\Ar`p?fوN8܌ Q5vn ^$LX$Lz8 ኌ p71a 'bVJkw!קMδOϼX•֞N=|>9^|>i]q|l^d,2ƒ՝4+" ѽ=&QQƙ!cp484N';8̸*\!R2$1j;~O,qL |Zxjټ`r'։?VNY<8ٌTp]`?{=ᮼ8E8CPm3{6$'*xp)AdymNVGaC;I3"#7;|Z#9mVkuȳtZI&LVcg)DovllkŔj8f),$gtw3 WZQ )ӵAWG5N[Zy-wZ5^UYw-[Cnj0NsU`RFsG 5QFii x;J;>X1[W8\|r}K-ZûG6Idu-H]MN5Nǵͫ㌇ wǓ d U69Vm !Lm]g 7}-4L V^7o'>_/>9n6=>1=_=zbHSE>bfE#xy敹_v1Hso ~ܪ:FX/ cYGcvmsOACH!g_DkUZ'jԫKfy%#$wvf2eJɤA Ӛdj.+ZkIHC㄀^br`#1֑( ;)y^1!`+uB:=B`'ՀO썇{6]h{#T&] !QZM넺 7ZZ -هD,إ6]/*ĿV7vА 1䲸ڶeݪpksU*]zk1~UEX `AdaGWYV 9-O9Nr9RŀA¦e!i n_y+/OM6mKQj%،}g;JX,rV !郓z=:l#͓͓ƻ| ҃{ՔN8,?u9^6c!='>P;I-M9t޸)!)y6cA # v5y*d 0GfpmtjblGIKtqizpnF' QQҧA0^+Ι<ǠN޶Ňͺ:X z0D'*1Ɲ$$y/ Bn4]mXӒiK-Y^%.7_8H !8+8U; u@y:r'zsz} O;>bp%kt2Sd![gՓ@wmedx*,;O93%۫eܯ6>5r cq;%- ȷpIB{<%s#9IԸRBF 9j( NvN?c޷7~Ǽ1.c޴,qʵ$Њ89F͘9A8'18@@;2v>㫃>cN7% HfSTN͸#G{Ⱦ]~ۧ1}( j5wHWIME{vM?^$/ lp#譹yQu]κ%7zh0w\;Zk28kcH?Vv S6 wDɇRfk)VZV&gv~қ3i*-fs%:eC5;aਇ:mTtPVdޢ :+:b~.{]˲u|!] h|Wܽ'o^ {O? 
JMG]ߤ&&TW6u??Oޢpcl>[^mz:MiSo,kU$*bյbZ:Y`'HPNP]JGBq.P W`KJXc6-(%#7<7<U6Qn+zl#ydBwY˺&mpX|3 2jJ <(^ !%YMY#eHS.]+VJF8dt FrmPT3r,.ȽQk.VOg\[7f hg<k~pٛCTjb;|dVs\!6T>}) fFXc [6Nt,)kKx#K^)0+NFTL ##^$ B gM] /N'K[;?BYVTq*FPp!})&Es؞%jRrR 0z jTkxI(i臄N?)rt?h&p#v C2DP0ȉJ:F}%9U(5^/j rt׌w1 ( cEF:OX4C`=Z?IKBHNL?Jn:>y]P&,.YG! z"Y!fڌHuȞhvV&^7_u֫kvo xzݛxm^hO:_#Kw?ߣE{B|i rNt!?&{(ۉ:d=W?l xvF+AVg!ⵚiL+N 2@ t 1dlln>;FDȮCd .[(HJDo:ng\F܈^+b7@ԃ9kUjL勇Uw/Uh {+m*|C+1jLn).]n˛x.i*4.*#HJ'ǕH? M X:%2dj~y1LǻJ0ZkZ 1sp٤izH& ۪]P.%g\z*0Ldg%C 'DSivC*oOjeWa:5\^~ 97y-+G[2Jd&{z6U\r8SbjVoC|MZ@ ?Ѣu}dS.5eiM=x)nQ?n[#6iW5ߙ(Ŏ1>jr^s[@ f qUW;w"v?D_̭qw^NقſNֽ}? ?-<gzWh |Ž _l,P$|Dl{'*Տ[}H9=n!= >l^} YFzngƫag4s 2kOOXw#׊խWfq+MXy'"YNSP8RQE):#@4hEZ09G;9cI4.֟y}ײ\>oa~[Qy%hw]$QBf Nh*lPX KzAwmrN -"|7oiYw J* ϦXVf]"iP$aʑZpMȧn&VD(I0f tw|=|fZzcb|[GPT& .9jޒB)!&I# } xZ.swfx Qc1BSe"MZ)}@Bs Y& 79I{*#OE<6ޠmĈJ+P'%e)rT: L<I?F/(xdz`2PTzxNB=SqK#hGW ~1gT鵷(oJ"o# 4G(hTh c n;b煥Фɧ40LB_~y[v]熾6\ *m%{h) 7_.Z%/:oD~,ܑ{x[>Yw*/>xQi^ W)RoMgwW$x*CeF`H" 4 q Au&6R5?A?x7(7ݨSMFo팥4`ߖr_6I!.]T^PAY&B^xCj:x<܄duPko[O$O ivǨBkSŚo=zfHwnjO_g?ygacM]G,Yn sPg\OJ~z'ӱy3YⓝhUGi ?Ym|@0;j%}򪹢Nx#؈p؆Qj/mR_c-W0qZFCeF1 k.s: '$ɳ\|\pY"K=?>>sPYQLu `H+XN\s*pDEN.( J$alF$ "5 1Kq5vY.ҜS'hajbsZ֒p7p2Nj }N(I9-\>暓I]y=B ስ-\LS$!-b;A{X P#AseǸ9#3x7F)PЩ]}俴w%G Qc"iH-3"K1q|o_Y>~m'_\\ȡ{P;,r/މYZrxj CdR7 ic]$d+jk$:Z w߳D~M֎pLBa??5GkSzf3m7%%v",]G @[G_ MHv`)Y/N>Lx2 pn%Y$S(^d|prd~^$Q㞓^EVLJRDhU1 {sBo/9uBKIVj4 '_y:rm%\HKR&ktx NesOku#'y"[ %rn8L%(ܦ| jbIRj Z_n(N j4ГJu:;L#mΔu$a(t=)vss.)X ֝\O9HPt́rAFyg ddmp mK=yӚXݴ釆 uCÅkܓ1VӚ*|'+9ļ< ƼǷ9=WجnfYá1!rnq=&B=۞Û8x)Gca֤p(=zoD>JS@:;05MͩMVxC琬J`Dl#!98rzqkMtʰOxPٶ 9:}TЭ@ЅmZzNm}8:rRfP/߻1}1b H! 2π%ZqfCTw4?$Jg`PN,ܨ.;3uu>d;\n_L^I&/eZG]T]C*d30@(0ehȁA ܯbnݖ6 ǤQ%R.<Đ2V#6Î\ۺh%#_ͿtljVWCz,ɰ=e) h]Y*.T\_DmY*|öf7Y[{j՗*҃vTR2J/{ W_j|!Tw6SۏɕBC$'noÃgՍngbܟ^n_~]LUaɋɋsN^kϝs뼿_x(e]ݣ{W_RZz؅`u-S Z"*ogG"s7dۀ^`?lf2H4Z>{[q$DIHn8P؅${ @ c.I*Y{d|";tsBݬEެXbñ-W-) ~o}F vBNd<{ϩ2k"BKLd6vȌBO(! 
AgnmsOVLpqoԡ;qAAOxs˶'".QO$E!%D~J S86cpRdUiŝw^`!,Q6xt(FIkc۶.5n{cc8ՇQ4Ya<2( wɈT0@J5iX\]u>ň8ҎnOZx 'FtyHçi|ҚPKAKÑ-i8T7Br@=}r3%üc&m7dl%6e} [87ӑ<BhZoT9}y^۰~>xofw)u^kE{Ѿp{.1*륳V4,1I)ALE&fƳ: /B9u.g) 1ܖ’i>c etkAҝfǰ(Jo#Qh#FEQy5( xXr3_=hZ(+Yr~Z0]MCQv.O59f$ pE{}?_/^':w=hֈ{$;O% _(>:* ng,Px%6H+5K46'w4z-Ժ(R3ڋ"k`C1+;S3c%{P\5AQ>vxpŐ5,3|Gny<Z;Iۖ҃HӄZvxՃ^ţísT#kŋ!zFeVgHXu_Z<`-OFv|zȜmo^h<:Mb#Z\YbH*dOL־xSS%R +'.sq2y+39ߘ5Yn E,Skkn嗭ڸ_RNvR$Or h+mKjIv %QHukR3v.+Jb]]{cW'R0[1u&#~A,q(Knam a;N0gDX跶qPH*jx83k.+Y8m~4 XJDmjFH*iCRYi<~kd!IpM|c=nƝAC[+&U{Z@5~M Q =ϯ{<hCk!."B ?6Ɛ\R<< Gz/>L~Wm~^fxs+Z"Z-/Sbp͔ʭ, S2uY^o?;_bGL(؇rl#sn 5ɹ@T*ar8$sA!N5L"XLr̡  I? Q.!%D=rBTY?8 (LnDٳ=SAin=4Wy*Ha:RF~~(H?B?|㸜QUa EPBAFi).<=|C|Oy!Ĥ&YF=4Pܚ9dE)fYkKz"fO)a aopIIZ8j>HCκ5 ~=ۅCt:m&C]BXD7[}Q(h~?y ]$",x{dySQe@XJ .#*@s\͙vm$ga#b CDrj+m6>!a.p>:21*hq 29b/dtdt7o4>#p)/|c#ӧ9_}z?C % K牲g$ˀtgq1,YF Ț4L[ӈ%̕Y&Ra*SR2 $&5W>U)moP!FB )ozʦ:wb H{KjjtjߦR9 1k$d.)”cMȵű:~5G#[u+s|pKAa엸oϳ J,Z)_ΰvWk4ɱwB9i*d:H]I&/7H7n9Jy/o]:Tyu.*,kC\?sV\6`~}.S{y61b@G%a~KKR]G 7fQQICr )Nunz\B1QoYz40vwu!_6)<aXKTc ?11C"ZʪqW ˢKw-Fㅙq+=X>]J=N^Ș-YO݌)g>M_i/'Ecf\MѢ躰י.b1}wNp)LxrZ盅>T=amJ93-KMK&y&ٟC@%^Vrxanx| w Slk @HN/2s5pӒJsfqԂ@/5~7"vQ:0 XM7ދ ɎIŘbfgEEijC(A1S8_ ́Ia:̦OfVڅsVߟ޿瑺_}Y^շ_9x|GU&p'vwG5 P^Fl'h̅IM:ձUhOv)5jُBf#؜:ќK~dT!61N%βGg$GmwuP;l*+>GƐ`r&rbwsc7bvL;lKSVxCe'axA8YBhqEzQ%:M5M܅^DW듥X{9VaOyOՃ>!i|j0;xz?zy+þk1A)}FތOwnT Xm~fxmZ֓¬YfvVp"(x>d@]0BY'L?,zMtWn"enntm X.b7~ # ]J'ˎr`@.E$ØT)2: 6aXbNer`P`], RpI98-rZn<惃 %bg&ĤfJ*3a0T&TY4N0xt!2A嘛vZsDKDRU ˬA@bkyִsqX󁓃!pGUL[Qq}/:jڭa oK1[n_݄;)zÇ:{,w2>lG}JdmzPjL P[-zvN/mWü(Ғ b@R'Ëy&iͷ?k|>zؽh GfKBs^QP*(*eFfb\v^Q*^ی.>_jL~i@uA3+(9C"PWJbHKOCDM*wB MzJE\L7o>6V͈u@tb#%ԙ]e8!)C)F 66xwtn**ݢǘ]9*y8N!ݒb,)eKrC=BX| [)BHrx[6s97XI/ ΠlXK= yE#\ɶ>|SpzOLl-H"7J&|a?,($ƘGIX!͟^LZ{7ߺஹdNJ0|w?-׊¤ⶒ#D6v˶Id Q&d>MV1 @n3c> *t9f݃6%=Khej/f6t ɳoEg(DH8(!Ԍ$g,.N:> Mx|jl&l"uGT^tľP2$` wGF.^臚y̿j p!ˋmKBw]1ikq>m,_l$d6<'<ӒdgLLG(a{=&vs9m05H4]ΕYlALޞU\ v؟RHBY/ MN$QpUbLeJCi HW;v.m>/";, dEOfml {w[j6c3֔0sut.4O}3 \Ȯ~ܣs&4fiWZ! 
:::< ֙].m&0x%[6.iOfuO8=Sc!"#~=NNgvKQB1^O!g|1ә8(}/{mN-mX?W^M&0CG֣qV~ﯤ$nܸޚRKx΅Lޡ/,tbKG2Α4/ixaGN-M_#X 5IϋC Xa$_S؎ś#54pgϏH #גs&E?Nίk!v&_]@;3&XR_ߒE`;#.Bc,@ڌ'/YBr{ie'6ҍ6%-vδMj(P@1ZN1mwٙrI+ig[hm%m% N638+.͜ )r-<2I#{:EvT|_uK8oI\|@-+cDYIkEN'qo-z?13}[;,30g}MX W #\Ckg51VkTR-l_mpQ^(/ڍ^CgBbBD$vD=7M=~-_"laVDo }}s KŹbqy,윰}kkn8ŗs|sS,ŕ*v◤Ps[ 2HJ ]`ADi|=K{{1+g{17 VSwO |5QY&8@quҳ^إ7f:E!Jt(Y-68hi$xeן/}C-]~Yr'tf됟JuTXhJtFA 4s**}X#}6/VpI qB?+e= *@Jc{&w{yOLN2U 1zLO 'K^X]8&haB,jj-\]@sLl-U{s;ά<ێ- @pQO,跌wE0O.z5Y{emtdy瑨3_Y)X< hklv;}ûݱ'[w5*CY\O<H2R,/5[pY1sLW;ӻ% FUs{}a}كtX3x\շ|g_E2)&b Db?QOЕݒE/$%5 0@2ߩ ! -RqBzXsZ j,UP+ų"a{O]LGgH]wS ] 7 ǑS^-rT7fL~ZE>d5-cMN!'{: 3g!lx}S?֌zxCWQ/4u^ZM-XWXSufz (h!-D\sL=14˴ Gl}&_\yz#n(.->x·?^2ZWT¡y9xd{.Ty^J6̉dTךM eQb+mx<*2}hxu"i:**i-[u5(qPReZ8@8)Ork8Ĥ-4;˜% 8/u]3$bЁWdp!xKIժZ]B)5ҌS'e蔁QGp5^тi/q{jlNV]Cl\V_N&u2$MKYp?i]2&MYr^dP1bBL|5|Ŝ5ec]ܡ0*lڮZzq.PaC:1YGvFM\q>ڰ&KIrܞ2\;o&kh6ޝ# WL_kZg̾;q7p\ ACh7 U@PK@8Hi$'Ŧl]]QJt|>5! gH(iD#'ЛSTȝ N#I,h%9HA a jUC x:h!M}lO7DN]=6깺77w񦘺7*z.U3ydZOl~gǮ$E+w!bB.)z?g{B2/Nh/2e_~A=wg0ٗi6RDbyKzXxCrYiMk]%E1:zyJ-=-yF:+k.:+iFp wJ9kތl=oHk<݅m}&Һ M 6:5*D: Juk=(֍b>q7]]ͦ3]cд8_"}ûz;\( O2[SJѥ4=(柣XtJƕ蓝x]߯k2oߦA>>X8l+=8&+rl1şy!HJoί g*JZђ`pBIs\1Ipekѻ$Oޏ|sTGMz=efgQ+n70pYl>h!Vt>0:4T=JI< d^3nϵ|6&qXVڪh ,l.q.@LQ E;$]<ʛѕKDUܳo϶ \m$ SoMv4;/DBj:+δ6GYҸdT1GKT*Y@VXCAJpR^FUֺLA&G}2.>;)K-5*HCh^ncRIE 2W cуuYT/AC.h|~0KO[xbT9K3O" ݊_%xկߟ~x7_GQgo^-]E t)U@WQ t,-CB,􋯼M^볦?Gu퀖ٛ*xA U&(֦~Wu .FEKP01EjU 9%[e#)h(]rH HxIBAJmxj >R nₓ^z^7:uȓ:;˩y߈7Nj' ~3ԡSD }:!Ys =(9i-@wOw,$)FWsFrƩҪJGC+a0MkA #A#N,T('㊇և~._B4WJy{l|C%oꈇ6-/.Q7QkŸVOy5dWيC;o_ۚJFe!Ae)<ȱ㞎# [- *"RJ`SD@ĵ_rmJG$N@ >R-}vHӁ4XHۻuߖz} VXp:mܾ8RP\Sctip"+'HPRyf6sU k| RNDi!/kg4j^s?߸8 n/:ip>훼H aţRE_ ^0%J,*֕ڥD31Kme)55B|C@`:G0aX{ЭūPf1\k&Uk s^#֘s%ghiRjel"Q9 AYT`BJQ([:TE6Uṉ%"D(lP yPQIyjDB $&RxVHO->/:A#L$kFMr^SbyZkDHԅ`*ӥ%ƱhUYD)8maEXu- "FE2dU(D4Ԑm5DNb@BeG}qpǸ{#$[ULGۅ_c~\~ Kc._?a:w7qүhk|a У?5ۯߝEt3On/.0sb{Dg.t{K*3W8|;dKJM _?=|'M;w~~ hxh긢k.BxxOog;N|g;N|*mÝ=;%!i}0Z ң.wZqkbρ|z5a i~H^&c}8X=FMEb+/t u4|2J3Ţaʬ#@dK\ KWsPT6O)|59Ηkhz3 ZmC4G9%W ZZIx;NܗF 
2,eVf*c[-"ա@23)"jӤHѵҙq k%!!޶+9cln:k q2Vst}҆|q_4?W%K[8ZQ{ǂ6K%2N ^2TlSX$/j$K9KH4]J`]/8fȿL>.)n)n\ͳF.|PZIG`*LhU-medG4ΙH]ӁE)@N2->s1Kajt+m*QtECHP-@UZOE-EvBlZ/}fݏ{eYmܽ/"hr^؜{ќE+4Cffi"w¡Ec8cCH;t ƭ:takƥnOD>k\ -p\mငDjqp9row"&JN(@J`J)F K| Ox ܒH\wE+]Q);\#pl)p&Q\@\0Z ZPRy9Bp A:J\CWgE9w09^^fzo"5'G|K9´7c-{"ўHE4k7$ m2_9NxV` _NOV7OQ~lB3cO VQzmp IǁL$ P[rEV3B|iAK7,<xDҹ[$]EchݣJTs* >0ft7هGڄۈ$Z[㈞'hq=Cym̍|j ázB2dNFn@ƙ/"`7'c|#3ox⬵H hq2`8aCWvT24-:Q͕ߜ9K)~,Y雳zX6YU75yFYMY&kS~.mZAZtciڲ|4ň1id'lF"S2]YsƲ+,=ޜUvRD*/']*, E\8n C AR p8AWI-ar@`Cicbfoz)xBVk!,c#fH *7͗Mf5CHǼNdZ"U@UI*%Rg Gȓ \O &U" 4Th"JD_HD#0$ ".`_@x#S!)VD*AYj'oح_Xrj]R:Gj"u:gS68%_l2 ܕ V6RQ E+)y? +lF2k#D̴0U1\DFbe @(D2P3X ?4Z#|V`q=Fw(*cdv}Ll/ `x0*QTpj4UZ/* !'T;f[Y灧{8D#! MLCpgб xEb '1hFT ⫖mT^&la#Jz)kv͋{wJl{6~bgMD`"VZ,!u mImx=> ` ґP:rQ(=^\>[ 0AWSWptN.fr>׮߸o41N$ ٣q'[95%^쬰A2 -0{ӴK.,a Pb=od&X;ٗ)i#JcL8(1X.A( M"& a DXDZĺu]I,`'JVW HgJp n$ ql ,o 9 x$)![5'=@8L-IÑ{%y)pjc)G(=bM"'-uM (}w`Fp xQtIؕDv \0x?)Q-CP|ݫղsx+}oD8b^(4bac$;OHz`G ȧƞaX{*<}b|lqcN Z'>[i/t}=ޠK9qkiۢKtM6)hY]:ЗS g* qȃM{@ΐvW .쟁9oGKun -4Xڍ=0h(ֶbڂO̠u۾ c)3wz߲SrtC+2[J4/O!BhhzYрIA5/ 鶴VV}>hǤ_Et;6 5iCwAR&8S]kE`^r0H Kyl3H.Z*nJ)ѕNWKDV)O0Ra)D0xBd_#.ɋett⇃ybVf/򃑽_!C_ď%_xfJȜ /?.$:m Rs $ŵ8rZ-zUX61&5~@ȼPZSIr2vt"8:8!NOe' {[2‘ɾy r+4u9slW4㳚3%.hAɩ HnNVY9)W9?΍>;xPiq {ʩ9́"Hc*Cz^ <<\^:CNPûͤ:'#t)N[Ev__%XBK͏< r1&*_=w,O8ۭv{ed4uҞA1$BBXVт3b)3nyDbRHDEh)sn94l2Z/;xpXmBݱoa]8nv݉BxksQXݸXD RLxqLv[VW78#rXI#ʼn].ݵ9zѭoUg2BR)YyTJP>VJh&dJ+.ķidu$l<5,:OBi,2'g^R.JiC|D>(2b̏P0 M9\ٲ(4>.s4GMlOt)#ب8B!g2"ǡyľR<$%E!e7,zvCZvS,'N;N`Dx#i_&%R`prEl8ә]xF/+ fjhR;AnR铚C/DRtښ0CH6D:C6qo2 2}smN#=}x)ZaCȞ-\aף!7PόMC~tɻ@FDWQWb٥"dKƫ${J~@ k/ N U]dl[kF%dY $=)sMr2U<)o;)bpNpp[ClS"48=FGQڢ&Cr`U`-g]}yQP /66'֡|C<:V2%ϘgTfXJ֔ǁ%B&qs{x.کΜͥdÉ伖69GA3 hl5 ˼B n"',yH- A ``K7/w452v8G[|P0 Yø#zxHѾd&<cbO(@Qm-q-yƢ O0I4cg.~Wǯ..zXS7uɧ0饸KwIߒoF _Ύwr?>g1΃mf\Ԫvhi|Lʢ~ioz:4߾ zzܛF,eE?"޷$]Me"2:Z`&ェ#ia҉2*_4wS<|ԨTC0{AS?8MSWkb^pg:oJ}+V8:/ʐNMU~WzwhnKb~ [w'~; 3R촒j T%p'p_]?HZ CT0?˯h.{=HFy]t-=OanitJ2No`F*ey>Mt 򴫟{a7”5oI;O4Ј0xn]7p@z1sF^PߋgOd^?(Yg4,8hs&.\x~>I3e6|*| ' ΂1,axq9Qr7Fe^*dESV uT4h@(h 
=?2DEzsDQQNMfTQph{JŕF"Wz7M[6u9&QG ^z.!d+SFߡ3 gzӭ .s K9.uˎԋL[Jo7_ALLY/e1F0L3}ʂWLLie#&Xh+\"yDÙlp"yl_+đk+a)ZEi£+08M.O0n XNxX>]o%Lh`AOHz`%#keЗaq,}H(fS," HSB$ 9&P m£{V:eW(XJ"=!᱈ϧ4P1X,byTL@HC, DKÔ 9:ia( ]D0aZ(bs6#Lua 垅T}i`r$@.Λ䰠˯3gݚ(XVqX:/K**v|W@ԛv9WLݚT0iuǂя`~z`1Қ\< ~n ⧱ OWMY<]F^k%z=%'W$SC%rݛ AG^wog!&qhC@4حv{E:i1$J4'86K96~) /TVΡuu);Fz 7/F8`!\'ݗ͗-f:/m&R[ rct.-!gl& ,iuаmrx؅g䠬k[;,o;Ĝ7kʰ?vj'<֐Byػ߶%> ֆ4(L(9uo%Z'%ݙAD2o5Rd篲ClRL\>3g4r&Ԝ3+s ,2/M4MDi O4M2Jo4uzJN|&́h-c~Dqynejkdӥ!N+S{]rZlW~GqmVMn[_v #FBPCLnm`L])OpT|*[eAk`]k3 *[Qo.4̍F5}@Pn-+U{msi6Xu&fE%\JE8QVf 緒5$=8k'v/+IivbCT_MRXT%2RD@V}v[k"揮71n[]%;( E{t`͝)5Yv,h?TLJ%UCELolՕѮc3T fT:FOff4r˹Z:ez4}G+Z]'ˊr_NWWWQCj ѧTJ"W/Iv]4tOEIk YQh s-mԂyT3ْr, pUrWwsNe3pUZ:S$k Rta-IP5y`DtVfOsT6{JDƤRUgUZY<yEu#;T8TmWȱTLHGU,/* 7..&/.Dp dEХ@yʥD MdťEKʻV_[\y_BJ ]淋U1. dJL~]L ^^¯Ȩ8+5oWF¿5@'G\\^]l#5 r^r[rjcKsrR/ d x&'z&Ee_. IiaO?|PwghG26$O67Lgbx΃fʄFGkim^ݼSfaq`֩Ge2Xf2zÏF*v~ID2U|N܏XA Em~4srdQR+)d6Wi+W3c L .TD6DΚH-RUnU2rVEs"S+sRS8xXlSnL|h2IPK31t]la5t̄5yj,OkjrHH %Ji.?ܫh?Z<K: ЁrqyҧV ˔ +h3iB2^X 'მKbNפАH75$ KEvfa+EzPFŔdӨk'/r j "Ǐ B>}vrGywVbKJ13Ys]Re0g|BsYЅhSRa8+/=G6ʙ(a o+0$teojXu<韜O"6uDT"@`Kp׳4|@v6Sfu mEc[Cw99 SdپF fIظ5Pea!aH*J)hM $a_3Mǚ2C[WОՋR׈*dgL,Oq\ p.?--#cR&nVqȔl] T % z؃EH# (c)n4Wh!*kq&)juqV6e[eQE/y̿s&bE% HԿ#0eE5 7greYt5'"W,|8rڕn 8<5[vPGJHM:Sf]N Z:">at8ZW=ekgКZS(\g*ڵvwzI1'Suܡ<;¢v &ܡ)`峝pI,wXEys)Ay-^ '>xYNAZNv?s[}2LWM#UqX Wr80ð= |;MǪ5NFY$DaWp):ݛ|zjr^kZϯE^h?mηkǶڼ{g@6D3of]5 \\`_EjRf.ndD};-nL/s>{9LJ;PY ,%)[ozno RLqtIWEXF6LF;sk| r[3ܶreR-8|='T5Bg[d=di͟B,?N.yŶcrlZDMUn;(vdܶŸ.")v(/>pKY1v#t&JPymx1C$'(DٳP0^BN F[ C?!|F_ jTz0?nܘմ&g\c]ig/ȱNMo1̯YJuf )\nt- U77`$ak\um4.%4%_WX:Hdbt,y1!l^-٫ux5I*⤙HwO;J6(~7ɘNMs|`3xen8Jϵ SvkdJbCR"70%gvTgTajd9޶Lp8j8snUgmmݣIԴͧ8:X3ߓX % O9=ⴄ|c`4T]۳o/lIL_=]&Ѿ'sc(]<ؔEPP86?D.= .QQLsުAͤNMQ;F4빘0-/׿%&N.4'ۨaSiN;+٠`@{ĸT2ST`Mbw1 ZGt0=@fo-G4Yߛ47e@S{ A PB@,hrw -`!&s\! 
Cz\"0$cTlL˴K ziMaSq=FgW4,O~cHb{bkǷ?kcq :R٥@[_{DK=\$!-,W#f5_|l2b UBaӁ)DG42z,㟍 Dr%.i8ؼh3}^2o?̂ouI+Ԯ$j}݂U?yr-U :~pr>vVgo?{=>"f(lÚkLpYz'7~)i>r?f%"w}uu~7wonoNOOvA$^]ϯ/\=ϯޝ~t'*aխkU_~ dU%ǥ_o}yߘ"ée+X>axi֊"BZ0ux60GH*g`_t}d I+F]@};fv&z~8=G}aW9OGdr| >l;}( 𸏨JJ ađJ@2̕i# }ߡ_ZdZb}e|oX&aoHŅ/VH̸Kf`-Pz`{pW؁JO_O \q@qϕ[}uzd8,L0k\K Yq]O@Lsk;6X"=szRil3 zw}>u!7ARw8n=fiE-,QhIϳHy8p:`:K l6G9B70#%LU!@J@窠7+S hd;)e ;)·rtv&< Zvx*E8rНP($T+G֫$B}|JQ̨ˣ;?{_䔰=}>qξ#-GgAm_om G}m=9>7UYXqB3K\ EO:$eLbjܯ}߄Y5(jg߿_tܲ,{S\;KRܴ{)7Q >XxAF ,AF^Yd{(Ŵ{%^=ҽ>۶Б`z^̗S!D iP?{WȍJ_RehtW*:t)^wJZ%> RFP )ѢZӍ jJFgb'?i <8J.6Eƾ|}PkۇG 7=QpzNko).9d[QvbBIe4IBf^ {/ocmmT965 >pyr'P9QY‰@qguE)Qt4h.SZ%9="i$BQPA[ik5Hc#_bf5Lמ>jݹϱerk.N5'zIl$Yb\R `TC"gV7Eؗ؇9L: 4KL챩cSǦfM;{SboCVk^5C?}9Wşzk0V2)' nB4ez 7c8p-&MM6l4dׂc,6>Xb*xmH%fWRdr=5rg䄅u!p9!~56Wn!6V#y6ԚgSkMkg]h" (Lg|^KTE昉P脖81qľvm >rHp[x2'-2:ٳ%q֪ONi:^/: ˙ڙe_7j|EvQ]1̑?_~ .?xwݷǎ%%JѢU'&Yϗ$ܟ>Oụ3;UX)r& _ r(Lye<%0Gw(Ǫ_k5&'g%rafu=]cV3ME,l'c5]1t6( PQ3$uĔBQOv iIho 'Ș zCM y,0,5c"vuA9l$?^1Y6ek`.^8|G7M1Z4/79%1۠!Xȃ#~w*Di: 2v護*:tkS(C)q_YKV3kAO8JwnF 7oC5}5s{evN9| 56n A( +Z "(3ʬ4xE ʼnLǣT:ӛo.R @Yyb%-wNr{aѹ[5s5U׽aA@h%h1S46ؒm ʈ$gkG*NIn6+iSBdxSd_9j8("biF k]g'L_>r@#uxZIidH!fye*C ASkߒ`S:qfiKZX` yfڳ(kCӠZE-'lguYTv'XMe'v8 iL[`):pW2oMV!j}<5S/۽d̜gm<=s¼nנ r<'o]].jEK3yLcv[$ZUgLh-y '.X1fZF~YF5UAP:hjڰ$\re8Xλ:WO=a'P6+̐ k2p휰޸vnנ2Z?Gژ+-9 31V^A q,-p47(\B13=AU8![gU:ByjIu? 
A3~L$rrS]`m(^ɐDZӬ'q!tCG`Ku;i\lH䀳 lc{)jr(Ly@r3K=>(8!~2a؂M ,UNA$N$DfrH23k*u3$XUnɇu/!m#Cm:i::+yDkHbJa:j Det.sf<qt]5(S"fӤFLWtLwcgl {U AYX-u;H1#LWc&$)-xǙA0ص5$$,e"LYZEA9O:K EEj77H^bP,/m1NV/t7IV!nM Ο]#`ĶkPb$3aogxIg#K#8ݰbjm.E#Wl=0v ʣшX]nڪh-痴]I]ԫZ5L6,J71('<Č~mG?{3DʗK^O%fe v~ޭN/Hvq}VY[V ki+>(D;FtKNu{aorj0CoiɑrɼW@<vSW^z~t}[3-+-Sq txϦjvrlcvg:P9h;MkGhUȷqb.;ݹ";dxJ $U!۟ÃjT5|::q.L[BG(~U`q[Xh~ L,NV4cj;w9:%)bm6QB.սɶn.xEڄ,mޕsAvNxRi}LDm Ma[(PsBkDA 8|6TJP1(c](*Fi[ z\M>Ieg{P݌r}=ۍr6\j0Ak˅2-_2Ub]rv͗/EzRS"~&Ƈp/ow+&#;pสhV݁;[uW(u(|9~8)۪vb]Lj_C% EgތӃ]~kk>@~z L% ICʘ7䘜ۼ^e-VK)łENv/\+n& ɫR:pqz5\\jDǺ)%@1tqZ`&`zsmEN58fQZxjWux°(|_zΡ)YFEM5ޱ] p+OLւGWda-<וe]lCtzbzpFZ51mZOsyp'HP{zq _ Ӷ7y7#ȳ26&)(t,|{miϰvyBssvjXz_gu}+S* ^k/?{ ;y6Ǟ!df}J=Ur|c%9Z]@0M<nG.O $^n0k&`}WiəWf#d-DIH:bOZu:{a4r,bкL*aD=aDJ4:hXA]'W Y[/2 8+EѸcݕOBRm(Np!\1lǭCsg rȟp*$"#6 2Y!VԒ0O:zl?{P25:ry7iJe@uH[geq0bok-lЩ/t~5HHZڛ pD՟-8G$~֟]ݞ {_,xU*=jj Oi#&&r}e4־1jظYsݜl3aFi=X6Z! _ުƢȍT%U[t+c'3$녝roMC g5E+vUv6c7˳ш١FhVT>D}r@P߽&?[W{i'o~۸Vfwձ$nW絡@&Uն&!)?ڱ Իx<+1y V @V72\ #b̗8p-;7i ƚ-L_emI-Vb`&q,TXU,VE'=񆁂N!p-Pa-ъK0&!Ѧ4fx&T.V[HZHu9͑Jp{"# P>'xVxs g6zL^ um 7Lg T v*mQ21 9CLJ6Bќ~z'+EΥN5[{,qC >HȎnv) ;ʮMnfvY~Gj!e/1X04z4.5(n>So pтrjn&(QK8(MQrq<ՠEAx$N(-*|Cz t$Al:7?7$p؟1*e4!9ON@B-Ki x/fV"&RXav@3\ sR{&JP*9D t\ReY!E/D6'I=ZQ D'QMeM$ᯄWú@" χe@r779 ܺ׾{a\Ȅ : 6 u?NQ?iyw蹯g#5[B/5,ScU4y]!C(*+:hx׸xh|ޞ\DDY G3x ?h#-g˧u^>5b=P29ft5_aI-ŅbS֚ʱqsm9\Nk={|tN=xdiIޢ?Z !. M> O3_g?eg1ϋy?x#X־oCޤaltZ,\5vyu#)E*s0#D FP`0.-X ,+P&R@ 'X` iB6盉;`k.vPCTlv]_ϩ\󵴮޳5F u!׷œT*)R/T帧v ]7viJ 후".<*1EPjM^Y&H"9.̎\NGíig Jo_Okߊ _j܅#7:/7*#`MJ;U,&AF:iؒJ[I">BO )BCnh`ȽD~6G۰hzˤT`aZ)du"Sz:ƇSZ_{WP-SMz=GIH(JAI`WDw:gY͎F,e-Ż7M 76>+ո[ jUznfny/|Nޞ,L#}ymJ-yfü'5!@ h_" DQ}hڰSsCSP}z_>DK*MU+wF-gA^h&R1! N+B+eFrv5nGtr>v Z#Ρ@6l=A: 9` / |1C bHC4}wkHwWk;JAJC7ҡ4BMՌj6qKQ:6pz{0yP٧2ͽ{>+Vl!)\:RLnI7='T2MG }譫%UkIZ?Ԛmq7{nAzEs^5jM EɔIPFwks.p1m>fp$g0t$jg(cHRRI)CyUa3)Ji OU9x%1I+eX |28c:("$,IHa4N:圏ZiPt\ـ2*5fAkm};5.`eF~(Ձ I' 1)}TP.2)+")> n#A (@ú[RuL䭕84z [-D+YTYN_TS qm6Kc7|ڧ{par`c@pߪ%g*knXG+3uK7T#S)jᜉGfSX"HƄ$ #8Z`^? 
J·/?5ɂk0ոGr(R$}T8yVE+Pr]r5Hm FED8 % mk?020h j֮7)8Yhܬ*>j*k lc܃UKć#o/7ޥT?Nd*ԋU| W'졣UC|-.g}<̶,kL+'<_BU/TP؈5nͪ޼Q{yOxtvR@WOL(ĺZZqT JXLH Gl>aNߞ5"'a﹎~avs]{ǍJ2HyN56KG$}*7n 4AOdHKh:b? ɨm kᾠʉM;"r+):Pיsj)5br*UBS\P1f32^ɘlA'AP }:hg%@Z&&I8\ac Nk2 }7zfHjRPMo&IFI3-%Uw:EpL)b *q0ɡSB8E >Һw=X{Ϯhv4Ӱw þY Z*=VԅW䂅$)^ӊ@_Y?qG\Q,Jȣ;0%dDhn e,2dKpMF:pPSN%4H<9N)i %{l7 jQHoipp 'cϯZt ' 9Y7.w\? Ws)cM-᷶suk{zH+*ޡڀoF|yݖe nkķ!ub೛/Gd GEg-.i:# 4BIȾRrXX>"͘(t2Yn$ޮfnn1o'^zo׷_?|- wL潥!yW?īŗ*^>rʀ*cz;>۱q'T41A- ޘ5&Uiqj?|hgm[w)k+ϱ>|vGgv2Xk&XwZ,{z~ }Ƿ>$#W0 &o"ɴEdBF.}1KFao˨( $lp#}CQphp  N9=0%Gv^:xJ˥2 O~޾ɹVV#$7OH aռ\ͧypx)Un`t>e/v , wLr\ϧ-װ-`}zAU7ϳ >VĦ­A7õmvt91W1587Z$chIY2t[y^TV#Keɖl)_¯YŚ}D F5IV56lt]o4ʙc_U(Wqq \"?pyDpp@6,_fLf{yR-{orJכ q{^_fG-2f_oE7rL #ge=YYywKSWm/p8,Gdp0B)'4]9MGv4ْ.5Э9lbA'VX6{5c8tlFځeo4ïTCn=OYx1bP_fpWE4Nћ2Kj'ԋ/MTq4ΝD5ZYG!Fm֔i!IƁ8qB{'!2H6G[cq 1$Y9j G\B"a°#::_ZKCh5Q 89%WE ≌@c7Jrۍ!Oo"5h8|GzPG]4R%)u7AR|@GMGZ: 3|>\GzAbw;7Ug+CY7EoXQ%ӝZIJDԢkyZUaߕasgDކJ_qzU0vG`@.ĊЗZcŻ{|Vjo=,wojEg cF1ߤJ_DHMt d!sS>g߿HlybpBB,'д{9esn]7 Rta͎!Ͷ81ϡ_ERJBdpלS%&sNI{73K){眞ÿ2߽7}o\L 89AjNmw]q?W{^C #OyG8%p3ֻaYaWK&XgJ.}_D%}߼Rsޙ?/K!gé_،V8įSvW.v<+9o$Nʍ@ Hݶ>0s# gG @Z7x1M9% oehxoadK@/[MCٔ2 gd&?dJi!]w_[{[kxy'D٤КީO@"5SSqۑ$e-z:E.qxzCzQ8vĻ`y.FWbWv\KEd.dRZ3&QZ eeO= ^`ʻJηyc׍HbrsnL:nN$0MUw* f o='"'B[ $SFJk=r?vŸ>lidK`Qp<=IQpܭpFiin:iPZ)eS4Pa޳sRaTnN]M-7붃\߁NU7^gѽ>*2b ZE>=9y0}NH?uȐ mL`xe]C t R1S8 n8ٻeޥj8 y?i"@E?+֖ Uq~WT'0$[)JRy"U҆wR?9!ɑ{5keij?ǧZ[JpG=S軠%SyfѱtMkӒ;2!=v"4[WĹY*#qV3{AYٳdbes|N`rJ)TҾԆϙ2I>;p PzTJpA IC_^Gp% }3WQHFIg=š+1j_Ko"{ BNRj[9%t)Nw'k@;s2Z a֛G+.9%h-L'ETCV zG߫Su% :?ڪAH^Q_ƹ8wpeG>E~((TGoqF"ϿM_ :[*T IvTdBR?vV2q~œHѝ7NT/OȅQ}5Zշ8'3)́7 Qk!/Mkm4TtC~ՋOTn|syNb?~wjdFR'5A 6\8Jbm2MD>&LS-! >ũ0yz'ZPRz##:&`@;̈́Z: $J(3Fi\<g4PR`kС'+pmfܗ?̮5Y`Ai2\$i\6f7\CTw%3ȒT(B]Vh8 v!%"=0v9qkxA*K g\P"V!>_?ͧG\X..ò#,ӷaiϯ"q0y;χ/ 6+o﯇8U #<[ LrG_> i4͟Xa<M>_S4T?B3&M=d, GĄ(ϸ rP/8<v`&Pq_ԢI@˭G‹ f$D (WxIR.Tx}=A.(NJ00*BVyG2of3!LB))0pMeݙoC䀮5;k0JeCg*fo/~Z:Cp~Gnzs> 8BJth*L347. 
~ovz9˧W_C˧wN h( 2i**>*Oha*ўFnz^755UkAx/多Ԝ\0&9_%si/\Ơ̥]U%M[1Qno &QQ.ɷ(2`^ 6BGcePH$;?La~0~uWh/^ieM9ưW8Ex0o\O0H~.y ݥ݅M@.^XKIq1|\;g m "x24BwPᢌ1pd[>I؟aL<135Mvi/l<(BP!Aq/@9:n~~5H9>K4:K"T~ptaT10Jsy@p`,H,2I|H\zcz_V[r,b@\S֨ʽLrIdKB;fA&o:qp'DK,:hI7u2c=M:Un, NI+(A;C N>Yd&T2%w|`^Y/rBB#srL-)5-ma*WaMHII$F4N-Aq&0@IPM )_dxf~/ȭ~zʷ,L?inxI`Ci֭U$њR5x #;95 яfq'sJJ+M/"sjRKJ߀ofc{*b(%ԤY`jO !H2"OBuVpye/;Rn=Oާ**cSǂ  cd*h@?BriݫUL6x;=Je*֌5 o@5Qa&|gcCŖz G!G [ir\3b^NM(k?Tjν<V?vQqt{>Z QB IEe`एv`1.GG.Fo)DoCG7fM}NQr<}#m0ZtQ}p.UU]Os;h;z[|§1?Pz;i=XƅTFD 8큫܏$Jr מYp :}6 TChlڎj' Z-zꉡ2xCB LDt&YaL[EU`,S" 3""#IEo_I)Eµc  %.C&xm_:w|w *YL:)A*K$ZT.:7.I H<3w3ͩ~Je|ޘi.\z2AM=#, [i߈M|ߓ.g#V!\|]GshMh=(?T(JъBYd/|jOR>7O-ĶVD :uCyKYڥL1Pٛ[@*baڍg,Xw~w=J(Q__tr=\.*6Ef qC?y8BRO .grm$b](_;}$w^^^cL?_W>0`Hv&Asg/G oNɷ]6DGz%V&k˻eF,zB~-Os.yW2QT# ;vQlAmB- MsLz`!KsMCbvWuhiOZYN+*J>n2``]"z^)x'_SYyUCBm(yVrif GꤼI-7BHUJχ@KEsn bsdh CPIgI2so/^N?M_Kvr}w2?/.NocBv'}f/si2j}x/Q#]^g<"#>ox?^wo^}F W׷SiC(gX=% :7a0GևɬN8YK֨EJU*k*R1Vc2Xe5ƪٯq&\) kS)7JǙ@Ie2d &-$ %`c1ˠAM25e53aV/ZY\Hړs>)KC!uw5z7tE$JP&qjgƭ?NՈTL!R2',et=Sd8N{֒5"ZIjIgDfRu3 kJzOfm AY +xԑ0dv1 NC>Ym`ux7G/.~cZ9oضS6VEvZE2G_NC/W'ؤ@JƤԺ$&N("1QTi=+pX cǎ`7zy@ZiST.34uz:$ R+yO*7._<Р:+~]46=l:.RVz.Z*j"Yni ^4 =l,RC!$Ŋ/BxmhXuxdxtIQܼu5n0 .x) )CDK |ʣFW 賻ɽy "?ݥX+t`&x-Gfӹ4dc u+0ZCiVa~' {TTT[t՘ ,w#{ph~i0J@tQ<m*&kN/R㒇PFi6`As(4{Г"[&ކIv;-aȕTb˳EYtz7+[=3J?_tצ_LLu -E_ T)E, W?D"&WgwW]f g{tO'm_ȅ - - - -6՜'fYJ]p5P:Rɣ!Yr+qC!#M !m8JtƼ@iB6!H+U&pLhJY_4KtN0X8 m lTCdA($I6p땲S><8ZDJPD, .;[ A4O` /Fש!÷ts{QnK6\޺JrҀ{j>fwBbcT^* NӍF0HІBGϑ?o(gYyj[хfxv_攽^ sWWpYPr)P^.he''~FeyЇ֠ Ь:3kX?\~GD4۫hu >.E O>dΣy< hdTC3t91ZmvEH]Q3wJ=3s4ٵ.x/V.Te_j5( yÂ|z[!$t"İO4vJMz-ͨ]/2|~҅FEl-<)N.ҧM/WlAHt$MB2nj&zݺU ,9+,4bv?W33_liĆ 8.=n9,}nc!T;bs)q2 yp^Zӈj KDŽ TGRأ\Nm9ۛ $/B #@Rf4Kiy 4yF i#kضXs5$kk3V %ցl?^طqIy G^r/EoEunS)ݮPIooVbMXSC*k?X𣀃[OA ZF!IQo |8qJ!FU*6A^bHE4eֵnE*J X를Sd zEkE2|/ymᒹr8JUPX!1:ZK"e+x&4Y1EQ #=%kO)\rHtJ*cn6HFA/N ;BF 5:d;f]52B][{|sfϗ=̔1GOQRQm~kOsADoHI͈aٻ޶nlW}on A˝"3V+ˮ,'MǑ%YC4#͵͵<~_zg:&|ha$O^.9^N*Ƣ(*{%)%0L)ӋP%_\G/Yi^z=o6p~m950^qސ-z຾~fg^w?U2Zന>3rkw[BSU)X 
'd倘GxBǽYY,R͗=zN {qOEox1C.%nU9x%1I+eX ܮ%߬«tG^hD<t5'9e\Be6­պ.)~`Hkݝ(zhELx#\M| l |I:hJYwE7GK|:\N ;@|2G-j2~ShWض}ܢcF&+rOictr۱H7H^=c9> \ԠW$bN5!_W}CǼ95psE>cӹs@t A\O.+>$2gpE>魛x{ ]P79q5]h 8bTNx#GmRp =7+?ƋO1'߰Iߟ轥%8imP$2Øh >jɶ,qSӳϓpZ]gAtW/ԚyQNO&Xkhx` |7 >րBEtNa+\-%M\7):QiQothK xWIi|=[羊mcj>;xᤈ\т =2]F*z,O޿֛q bГs]h 8vR"w);޹\o:%uOrω=:e;%#D>wӽsތ;%Dmx 9a q]/'eɽi;;;_!&J4Ƥ}VN:mGU?膹.Q[/M4{)&`z_)87 b"du%N.U}NQT6M>#yx.m >[QS}[ 8i9u.7{`ΤVRY?ݢ{_w/>O9'Aa1 i\S-aZ>;ңD! Ŗ0=`HZsi#c흾uFϥ LvdnGП;V@Q횲`ߢau92v[quʼn3e7c~~q _|k=mc OBɻ.#] wCUkjo~z3\_L}<ګ|K/juo3΍-)|׃)NfwywOb ?.rO_">}L7~rg9^d+dD"QQ95rϭn=N#Ĩ:IIK2:Z;„E*!QX21O0`X/WIG|oF0'Y/ljmaNfXKl![J.kuئZ X5}[(*y{>%, ñ#+'[{e(e)>9{F0ƒӀ1XIY |'"2 {ZؼO˨rz͡$Y8,jYΒ&w#PrRzƝvh\D+#JQ:1BD WC:bT(l' #q5a1iw=_e58 ꎛ|,Mn<.?-/+uO>Ƚ}^ݾI+_BQm?Uڨ<۬ʗm{%%PytvWŘ.g%$$EHSFDW(+vB ^n\-~qnޛ=H<.b\G:2p' :evP *FmI2INRbV:0s42S-s_DIQrjb-Ŏnԁ΢phkT1Qy$ UGˋ8>"YGUPMB9`URh ]tXP{V$,L;f8v;扦,PmǼؘCd6\ʊFh'>f;8>5y"ޔW>O<1ʢW=^'ׯ^C9`=Nhv -$sAR! Asao܀E}b: SOFci ϟ_*l`qj}SKu^4n@s=0"SFљ+ʕoMخGG)Q6!\>G57>48D1I p-u7Y!E uB6Fy!? tQn9 N?};jz=+ 'm4!ۃӿS0# d쟷ڦ99ŐC@&@[TL\u!eYrqW Bo]W(`D 93Fyf8=ORMi#j!Y<2 oRp06j&&ě#/ RHQ},R'DrEk2iOLդ8UD0`KQ`-Ӏ҈uQ i{`ޱ g5PL864}u9{i`ZJ^|%, ɧp`bE;jAb+؂vzm a5!;VHؠ;Һ7gg.BK%捯'~EY_| MFy44f|!m _~lu=M:~ u]Y~~g{}=8{0 n-PQw?*z4 ԱPьZ*ihFAm] Y3K߃krpk {8-~毡QlsCGԎGERVjFfHr,R'41}9RgK=|श۸8%m@JƵ8XߠGh:TsZ`|Vռ&TmYMk"d-;tU;ǭ61oᚵxYG& B8/y}bz9Ư|3[dx7j6|nO'r{L{DyMYW TcZ-6Mn/y rB),a .3m 2s;+"iZIݜ]g?s/V\¨ڸdcbD2"OHƷeG=$}VMIк#Bj7LW=E5:j i7+zzh'񿈟%u'v z w'r~6_fHo/fd m_؅aN7}ߟ2JMˀ΂2He"jc(˙b>5g.y"b3̓hf9|w@1R[jjg+͏mDL`1S׶,Tlr8Ϣc%yS`rLJ=J Bd}Nky34]%zmh'knZYg jW w=C<==CIQEXY5E~J!Yhw S !RWQc a-ӐP 5 z$9k4S7c]PTcU-7\ZNFpha 56KHdന>3kfg!z$9qoSzCHw礍|R<9Bē v*eV[!*x7NhtKh<9٣ X[C͓}Ap&x.0$Y6QA*XpI3yְt¸>}$;CSVF ynяf<;f3or˛ 8 P` Wюg<=|=x[F"4%4zeLDȀI h JT[)U4D 6gzÌ*ph21+ @iYKz`=5ؓ`ojׅ?9Izb}(B^Iva kyX)uC8j6*TUMNd2H$pP l\R 8/V+T(8رZtso#IMoݫo.mm\igcXhd g֘iO{NjmrWTh J2~C+ g[~_P *Hɼma7iQae=[hvCKb>r!{vN%# S͕@@51&ҳdoxΨW`<r. EC/GÜ"{;ݷ'daQ!}/Z#Ք?D0{²K9 4|u]y"oA0.e Բ5^! 
U%%H[Wǿ@tlqI?D!d!0V6 㱲Dm=4fI "ff`8A' t43wcȟ㸓>2p!Ch aD@C/ :Z h8<k#qZ;4 9 C4Sel0H1'B{fp禀q'tqMHC.GJLBĒ?¯LVJtRߝ~ޥw}TljMMW "$Y)n˯dG(lh!Y BvE jE?fgv[w9T ν>9]NAaKkr2L%%v=8{u61<$}j򇰯{gafZo|,F䧝Ts/?d9.k۰F己^9h@;L2k(Ӡ|r%/ #c rP ҌEp<aiXoSֻ/0 `IIv!cJl)P;P1;]S<jAFKtQHӹ"=CL 3z(>'=|(^z{3Ŗߘ5{yw60|FNgEKd;EA6QKLw|ux[H;ōd|Eo)N ,^oR;1hI\#U0I?~$rH~MJvU' zT]݌fO]?8 F7I`:{Y_L'zaIKz" ?O7i6s+ddC6D5!T8ӡ #KƤƈ[A(K@ Vҧ៥{闫dd CCڇE[} ]5_;o\.+ojUEl!5 VӹJF7z$,NPO.)O&Zy@']Q 9Ol-户3p!唇*8AML7O;l085HPxoaTް!yԙufu^oV2ju!}y^Op?&g=-|fYB Q ?a@óad,Hw ?ճa}>́ĸm*;GYsĐK_?_t)q+`;Ѻ(d'eZwg\~r'6{bV.sStɁ\.]>tt(2TVߥy o̖ yu* Vs=Q"Vf9_tL4c +pc1S yzwzH-Ve{}8[-oV2d.7/_A~Ρ@/gۋ-hinF{O-C޴o!PzpsH -WKKⸯXUKGi:~&ٕ9\0e*)% Ґ971 BXkgÄڤ)ߋlմcՁVWם@w"*2 CDtؑ0 = 1|@]_kR{k`JNu% #c r)S^殔7u5mUY|8KzWunZe$=|Omҫ-r޹u7 Cמ1cR{ǦV{xVxzw&k58J `Pkn8 hÓukqkstj2%zħ[FoYIɝ"h5A |š!A0,kQ'>v;y&=3x=8#QQPe?OE[牭.DÅ/uhXʐ7!$?]к/G,;IkBI.$B#[\`ۂ8 gQŊj , 1+hRF&"dBs T!- :⌻^#?յSWM=&^;}l.]"D Z6L#"G]E,,ͧt+%Le|d[y;"y{.zӇ/E3-KBx;lD^dn8EB[y4,r[cqJKCBpHwu!6E$B|ء5&=O꿔coqhYS |X=;/`HHX-\!(O}[KD-*NxaRufLHW^c:6]y$;ń7C uO*Ĕ 84` zZL'OJm2W5Ą2thw1Xp#09mPSZ 7H ]p},_{d^ $e>aCQ3 ;D ԁPдgk&YT, 9 (4r;?IWPF (R?X єȬ@XہclUZ DbU9FVh2[{.pN0ŸLZ Fdv'ݹ M /pָJ9/Y8p⅗!d (k @v6omG'n LDlвQxBZM;\*k)!k($*VYnQM8j`0P ~$H2vJ.uVX6h*W~S@{d5Apø IA5" jPcs:3:)c->{#Ԍ{K#g:,7V kD@0TvpC|R9oLUz(lj)uafZ\%۪?dcbvJRL!> w0vG~_,cF/.lX,3&^Ԑڞ{O3`ᯒbS-#V.*kGBfɔzw-v;9 ruݲ'ڐ'.Y2ki7 N3RD($$ku` YrNi q(q>9=-CXH5`#hjB9X*R(`ɟ GPS(@pr߿yۗD0:9ETJ@oby%>٪2zW**&tiYo~b,:/]]JqwR-0rIQMS1i{I}YlT>ET/P|uq3}ŏ(>ǵьCXyhmd1C8MГO,۵c(O 5e.5_S&X.uʞ U朠Nj?PV'͠Y{HжH|T)&[5~bqVW? 
Ft_5fel yzӌ`=-,BPPV?6WaEB ~\'blՓAAaT⹍ )Hp\6I%KTX^ܓG2A>aVXJH1R\s> gd!)p.pdg29+.F7 7Whdu&]I |c",Pue#"@ \$>VZA V_!؂3r'RD:: Y/E&)Ò gy 0=*^2/_[9GBMծqƹ*jƩV Sac>漦9eDA3%ɠWmz/>G_{1hY8߫^^[ >2 gzCg[`)sMJ6X$mNG!ns)c;3f%͘.m%3bx@N`1sy¹ʍGH b SX,v9.hlI^ :Ѭv?w^)A&Wz @0t:&rJ,"FSasBRe$Z˶&@P S*W1nP<69Ws3{ b|-2=~I.<^?0)x K na^2g!;T;z x2-p\YC>34egJl~~H"rm>=ݻI PD69k Zua j"M Q zMMs^,c*{38{ь_ կmTkU:'j\KΧex)-4y+ nTYQaEOL'>˜Sg  b{"ܝ٤&1aA݁\n.3TRm6|??J%nRy)Vk2L5(Tlz0f4xaKͤ`jId #aWTZ^>zŕe+4d@ƓVƢ<`V1!dyl%s%>`8Y=h5WRUaUT$G kŋ:$ )4F79˽c\٠`#`uR/ !cp'yYh b  b). {Pn;‚6p.^Ah]ZbՎ:">RoX0+V=P' mUjcr+TxJ"ΔEg ᐃuE:2#QMxY='2y7<]α%hjY3΍+f0,e#02pq  uME yCa#RGa>&ɑp6RaEZD @., 2)oDA:``  X΄8x(!  s oo9`'ăc-ԫ_&wqf$ }YPƭ7dSlԦmȔ@14lOk_C775܊v"ӣ6ezt2x{2=ap=jP%mWQ|m(Qêۃu%>h< c@\cĶF)_4ZۇֳDnȦr\*{&U#Hh0-h{3Ȗ8/fgQڬ6:6:ȬB(븷PRoԡF18苽1K;3 Ge`_8Zv[ߺ*p8;*8J7+:Xn,Xz 30}/# RTQ(FPMpUȌfuw6FfG/g.$-6t_jǶu +..Z!rKL(.Uځ x X!Mn+bXwRk*|DĚKpy\yJ YbYK;xÝNv>~x?eֵ@e'խۼw>i$G f#@MC[J$WR%=LfQ3n&,4t?f$]hqc+快m!ޜVo"7n+7M7m",M9 ;0 M%uw\u;{ %zdvfj\9vZuO ,9pVӓ7s^'8"̠\r%#*y+AY #-NlZՏhwuspލl*Dib\WIy7w6kC`EB(sGv"䄴qM1HFAY.3NQ /nȎFʔ[;coyܳ3oA?!:ͧqM _Hh8leiq5rcrjD%8 P`b/ IOg* +ga IW.dJW&bnu1HEN ՙv%j&$+eJ N6X0]zYi)*Us zl<,$Ok0/n GWv=^·~{%f2\nhzWY!+'ŊhPKҥvȃ^M_#`U`~wB*8A!8d$-Ɵ|[^̠n'gLu0ݩBѶ+5V.Ŭޖ8bjnD٫Xܟ^ 5+a|˱cmo~<5#OgmC xHUƼ!gq 8*bx픖lP ذ|֗9~K}wڕ +^/$b9ZDObI!SvQKjQ]=c3ď1A%j-/~jy6Po2hԻ`iM"%&bMuk4n^sŷ'QEc{?~,2$ܛV''S \5 )R7.'c#i\xj׌p7{`gTJ~.Ba<}2S{S2~!WH@71'Á_O>@ǧd#"""l,[Mx2'eU09e2!pl F!F8&߹)؆@(J(KSbKp*x/V|$@# ÚqZ9/ZzÉo?] 
cO#c;3f%͠.,3bx@^ *7 x2Co12Vs0\G1ؒrpׂ@4xp`F@GZ$VA`ծNs("@ 3DrAI<|?Y)=.gS뱙 oC`?fw!D;/6bC8j#RF̷2"c c O͓4j TnF/$*Fލ(gg#=0sƶXp}|$!%|xlF&'&Eaꎗ~Ɓp{qFSP;z`<{(կ{)`whp@7LL?s3XMQ%j{ODT"ݕ0A[c$|=XI1W Ʒɍ>»hFaSb)(k8}ZsSZؔ^$v^9YfLiJ+gy08x=X48N5)q@ʱ^hmsWsDd Bn';3 -/,[Z'uh=g2)m%|ƬL UT!xΘ-lucuІH>\IQwrґ^ROV  Fd>xal%*k) ]mLgӇcB9DDĠ}RfYO4gFfn$0vQXJIC,bYr-M x{Y=eC@|{~É7i\ֺJk8܍tr_6p"] F6"a{8gT>؟?o_|lr?jsq+bpC؜gkxc@<T`HyeZ<9m]T1ނBA'F&hg`T&L`sq$&BpQ068^)jxCcwem2wjnSXW)vRS322S*R(ʞd*"\llEgΒWy=Hal'[Vl 9Ըgm}/GD7ן[K3xM9Z<}nHQ4k?ynݜe,[7gٺ9[7u=2̢_]̳&<ךZ8PyI+y17Xk/j à{ hJlb L3۶lvf G$We~b% C`D ڤImA`D2 E(8+D*MSq44ŚM!d]ˡp5hkx`ܫ/[?M034%I(z8I Ҋjmw$D䢥qv]O"Q-jR4)'5y)TRwu9Y\n[R$Mh4Viߞ{iJ}9^Uw|+Rų>Hwh0&T5HCRnm0QH)"ߒ\ݜWRv[.ǧ{gI] Se0lDCj>澄X$d!!MFywf:"=~H\UY~Vk,b˪я?!;yǛg;Qlife9٣TrT >hf(屏7PGεq!TjQm8lF_JcWee@:$ v}sEi"~qp9__ЬW'# KUz 7vM+PBdIFeK|z@vwU6azhz:"Z;ٽJ4d_cWjhߩ@`ȋZ. bMy'{ ՛{ Y?"EOery\3d/o꜋vp>>5=5@{zSoOjC1Jx{>-,(׏/7BrJJ=K ) xp8 2JCsƌ%bi:06F8Mt^o9iPaL.ӥ V`Ƙ P;퓎Qr]RB<2)zUCz&hNd[Y G9Z"U V*#C|GID,腧-#PͅD4h^ :녏dyzPNx{SЎVǻ,>{s?|'h1G]J!Aʠ~&447+eѵQjEHZA`!'}L$YR"\t~biЌ}-fm^#R Lߝ6n xjB`,1do_vc5@j jTlQ\H4 }L!hJzT++cּ3;yMp=Cxcw7GL{Qpxn1l5[V7]🇃Ft#c8xu~9zzf*';e'贆Ŧ`#e|8?r3PǗG; 8׃!hDURHWYȀ>Hܩ8Lf 'W[4BήX1SMT fJmJ)i+1W|oz'm~1]ʋVџٺE]+bZ QͯӦB^_u gNd8mq8\C.\O{:7V=+xzb~>Pv@:?(zxӫ_:xqg-)ql^˭'vέW[J27Gk/1i6n9$ߜXݩf3p5?5 E k[zrpw)A|mò6, ᔀFbzo7W58:!^w0"bZ#^A0%NNNAJU>^y1RffH;v Y L[zt[vtHrpl-uvsP4Cacmv&}o@&( tXDeƟ/m6k*ϟ} VKq6[o7WNW,OS3o *#qE S(U ڒ=i\oON?ե;4cpZ2% 'f\܌/U%Hg(gZNFRO+TVVEM,KJ` QANyJω12' N ;PD%O4] jQ{nLHFh!FP6" QYJH|t* [lІj-iՀ`(!ݭ~\d?LFe"$ςH@ǥW֡ĉ ^rv%,RDOFEj%Fqڅ|4:'yr%VeQR¤Bj@z1 zi3< =Tdc3`PR&:DkmPa%F'h 6z/ŌRx+%8F&jPUuw1ס7rQP,V,xV q ,cuƖ G j$f$BgaF%ܐ-(.Fޮr kcI\fF #MȒƨ-{Ώڞj= KSnKE;ZkdA#q_/t'FZ̤,ԟ] 2 O?D.GSrZZsհlv7O +T)E*2Di"ͬ78ݏǿŝHTW RF_7oNr|7|[=d4^ O\ $ Ly2wou`֌7X4x8+ӭ1dp3[5^^_Y&ʡT|9y2l'CPYLuw؈q<Ĩ2~\߄F6i6ZU b)Jp*CrI})ŭ.jiW%î܍D,'ELXgcqE?X5$pCEV"Kud!+"(FS ɒlwٺլj6$nQs/lpz]Et<ӮլKJ ȃ}F[#x>* L_"+8kr|q D2tרa|]E8AK»+u q0ϿP#:':kfTڒ}L9J<}@>U ̿N;&fp=1 2}_5zCeغÊ:+,+^u%=`LVQmR%i T*<8*0>huGa<X!o=m_fô@~m04U\^˷^N^H.%%mmZlkc jA')KGAvRk 
[Binary content removed: gzip-compressed data from `var/home/core/zuul-output/logs/kubelet.log.gz` (tar archive payload), not recoverable as text.]
!52j9!T[)|!96VKmrjkJhJh2Y *%şIp PD6!D"HךID?:YK"$)6&ҪY ??]!!25jod8H`l&͎$5 3-3T] Z[bծ;JR' ˩rVXŲrࢄjg`"O.]M-ς;IQX)޸5N&+ud+ϩ+8Zdܧ+ .:N!\fUwvaSP]Owr8:Go%ٓaiJ4Zyq&FbhLSµ0RxSEpf5?-( f*3jF'3l53^f M dNFfq3bM;@E<qaKCZV HfG'9=+ %'ġ5,Re2aUB$RNLR@=emD 8 VueMA(r+ J@Vh mIVۈQG؅aUS8¸(+g}e[s6|dOpwe˒A6i/pjJƪ2/?*Tnc LYݾ|#/???+.wH~/;~gX=gW!)hP 'K~1v~8D/\ VA^qaG {Ks?=᭰&}HVIضߒ~269<˸դߔט8UYq#}1b2V `N\H`)Nƍ z#,w՚>n=FFxq=nu1wHEFFJYaDF}YVL9)PҼ rF:xs ŕ;Ƹ~!x{Hw "H36Ǚ%( C)yQ|ּ@bjT@5!>d5/Pء 9 X[L ʠ=\ eK"dP5w} **Қgj<;Ҳci)6ci HנDU'Z[j!pQmx~u: AZ02 ڡ9,$1g>䩞eW-9Ź#Qc~0aC~y9Y8;w+"B:]~^Y,A= A4`MB#UIM+cUM< T98YF(9| jިFb?՜w4j͓h$>sDi\KJ<ç7'ħ<@͌.J*UCN)80L2s Q;R 1U[Z)t2E¹s vkT;ltL69;S ި5byFo5:#)X6F\, 84D>IF\hUfEWit҅6@lwb2v_Sl QTXozRqaQP5oz*uVF  6=[`SpJ9NnS)PZ{gBa 6jzbޛxBz Q)4Ά83YתRiN!\1bۗ:Dcd#w46X2Bֆ(F-E!xbu;.W̺ucZ>4Ou7z>̹3Dy%0hƁJL׊ԥĬT$[q`QzMDIS`9FCQ(24 +u6`V #u1>q~cv5dVNoG~k$`DCKa;T;= -S@OaTKҐn$O@FvVՋmezRǶR{XyB+5 J!1uܐRHL*7J f$JqBjz({;ڈLѣmGk&48&eIFz~mz,Q ŸJtFyST<Ϟcj2 ` J`"ԥ̥S1KT~XE&0ɄyΘ|,5:A樭q)"u.MIF¦IV(#Ԉ\4q“j䣜 u.%,nlfpY.P,s:7P*4J 68f<& `t\ cS8+^*mV-vfg{= ӝzM5@'w(+ ]_ 6&/lsg9}X{&GUWPzb(^F5}gQT,ME+&sTF {,܈V!xP=.JwOn+鼺N3b-$o@azQϰkn@0H -G')PUU!,3Z=#E&/<.,{!eâ4sei=T/vL%cd eF.C&vNeC3ĆȌ0Ջm aq$0ƞ})0TES+L^c٨K*naR4LRqaT ɗK15l#5}):Fq6YpїJ/+irԅJ lpiR`In-8/,(sʪ,KSr*[9ryN&5|j>-! 
c?%3BbZj~އ-bj&j3^ f 6: th$V2!$=s*(aM1n 8G 7rԳ :љΥeVH2 dEM JA)ȬE҅Z-uFS$ӯŽǤ/]>+ޚk%c~LJ{9'r >~Ňϫ[_G?_\ńm%6Hr Y˖俟&9P3RqxxLfgyqOOX߁kAjгh{_!8 ZV`TKlQ|6$St7Ը_6bZh v&OV_;OѷM5rC<֒L_yn\Fdsd%qɂ}W+k7rM:[u\lK&f]z/\߾ F~zң@"%'Yh2j+U)(>-Ϫq}֯j48Ya9:=/A/dd,; ^ Q&r8OZڃY=d)0׿buX+zjɝ}Opb;|KbkV.<=VW %׬]JDcCj@f1QΏ08=s9Fclll9AۣMf<${P^%?y RN8&X5-E;͊d?_P3Y#YAiUG$ҐT@I *eBb[-Ԫ^~Q/S}ۥ ?j( 7H}q$Qip 'f!0 6p$d{HH* ,҆aKATM5:}F[ B-.–PK5T;-j[#[FP#l _m錶*lYXs)*]ihj[g**>C֣ţimp@,󩮢i0 wT(W(XF4շ]ѬU{${TkoMPTW+  Tg"l) 5pl) 3\cùXp#Rgj=ohKYxeqԤ,⮩&Rv3R9X-eaFTFՖdKlF+P0_$k':Fx>ez[5Պ|&['0/`?5|K O &I]S76 &XL5DX `V=ҧ;up^rhqF\z1 ރw7ls`s*ߊ_W%wۻgE5nI\w:^BTvF:krSZ.#FȍmfN8YR=ISgH3o}My}gPd2շ]E&Zu"RLPE7ޙE8&jۈ۫#q/àPE-gT;B iKA6%m)5k#|D5-y OTE5AxhKQGjBٝ R`2TEeY8Pyi|Vփ!Ԇ%YO( 3tXl_, a4Mu\^@hھL2iZAC~jEJEJF{UY^PY ܧ,BEU,ua]% oh.e,`z`1tںE, n Yg3XVXl뱐ՏKzgT56K*tZ /vTQ+|ZJwmwx" zG5$SK8"LT#x 3j)Y6t RjT"ͩPxG2nZ g +%V-5B/ǃ"ض깧iwsu>x <#5̤~S~6E0D6be.ꓗgco  6 NQUKjoRT;nZ0$~̕Bji{}MB_IA&c/^x0-$IM$) 4V n ~uA)51f! YwUld!:(xfZ?nRE$+nۛ߅x?m?i^kƍw2YH&o4Z~IJ<S$ |\fS`6*B,MCRӃh4o:t!-22LuϕegLӔ|A:]nVQDU6쇬wh7Scck+6lUXs_޸zo@~􇇢>UY߀(LnUi݉O?F%{oj%?` O?FSJY>V&5y)>رo"'dh /6AЯ=P?sT1,>p,̪6qDRL\m" hj֩t&e(gn#I^dO?&?o/8gۺ8~yn)vRk'꺀*gXZGxc5jc9ґ*:^ܡuM<;'`_amX[7ZGʣf&>q`c'ƨH9<_;FǹOu׳QCT߶=%rFً/LAPkq:>rE~VW邷v|)/66c$Ƿ(ڞM+*]导wmes\戙12!SicJC[+Zg2eN<)<{Rxmemfӛ/z{S5S>=Te1zFۢs_7Q^Z[w·hA}|L=3omć3|tٌ2WčCW39搗ZqrYpK mPzhQpjo_ңt8~q0; X0VD{b/'hZ5_W3ZZ|+.+g"}ԏ,?s>˫8("ݬ ib醌QH'݆zkK!-0ZWŌ d"CnZF3=t|6D̾tFA㩇H"X ֛v:5x^vƙPHBl$$YU^'넃/uU1q{"|ZBw{ ;QPwבE$3r`Sx8 LRSrR86.HN9A-VH-yzo,ړ $)@k`v1,:B׏uixP_8Z%~yBlL˼jפPO~[je^~Ԛ.k/wUB~W ][~4.LLb*Joy.yPOy4£*~/5goamHu`4R[( ƚ``67 6#ŘeEOd0P@/?ޚ!7_5k$ ;RƔTJ9PZAﴶAp/+6BR,-kɔE-d%|_k]*\-L+p"}&'Iv/6tΕ92J rUA7@{Zbr6\ H!JCU< ;N)qDɛXVJ֖SRrQIZ`-+Qpx0(i- 뺮ho_H |sٌM[@7ь4g( ]+Aha Yy 熮:q&x .1lJ xE),Y0cѡ5ل6u\6j2Y:җa0MeM阱Q*ve/#E ohc4C~&L[f-g=lpf0Z,# hd*ufIŖ`3G*WԙiQ}+.}j%̴y,ŻVp b{f$69h㌩.!*rt!N d\Jj,$n-kQ)NMGp'@B-k\_+J%2ig2KuD;IE<RP8@畢 m˪q][)D4h$Zhmχ;ݵ5x݊RZ6 â5 X#:,҄z{d"y%0afÉ^(URX%h'vX̡퍝5N\$6҃E^Ey-iMRH+Q ,X*#]-+UF.kiyx ,Rˏ6e >A`yEU}Xؔಣ3k**+- JQ:JBŠ,1O|]Ui,!82NϬ 
9n'{"Áb-Ip/Pg"'\Ͼ#.NG~c P"K `(A1@{3qS a;+92P Rÿ|XJ#n˶>S@33䮏~fT'!c>c;'"$#T$lu/u8&;ُ)h_@\d}/5]?b(-_/>klYLSH$L){ %2|N|5Y_FtU&Y,WqZX b2/Gi}766_>ĿP6s-_>\^JڡbFO#]Qܬ&uZ5n#?M5i%犃7̓qp; =:h%@Ćp6 Y Bև842j( T o#?/*uŌfҖ`#)$[ӳf^=vX/bsLUkRpASۓͫIq{9g_1%Ŧ1ĉlZ+S/#[ NhI%Oցi ) Ŭ{k|Cm=Q/c]OXlM۩=[>|gӫ}Z[i"UٲWޮe !` ǯëOnp60'mz6%4q[= :";L8-FCoɨէ{~2Bݥk)1h=Io-H@tտN8,j8: {ǡ=;=gChFLyq0$j3s"^3e'|G+k:ݱD][HEv`!?rƶʻ5xѻ trƻ]K{nS[hMIvl##&x\N;xsfͻHMn]XnY6rq[Pm~jG{Z?Olvyĭv|% ͖'ْμE#ueԒji}A%K~?7~쥫Q;b2](f_\DK亨\RȩOpR'З^" Ȯ|WHx̅1_mxu w*]w|Ä~y1GȦHdc^='2LʪTP(bYQ [j Qo=%Q#@WhZU҈n-K`WWPX~˵"ˤ71]Ljb2&^E6$BbILM7܇yY6 `B{ff/beZۼ CiEĝ] du J_*Đ.*QIaq]FbnD.#\~8p$5S _ .C !E2ryk; ޖe\Y:ޜ-FYZR‹ٕh,()wp+  ^HMz"3ռ3OfP෤:Xu7nZYw1hXH8%F+;raUB0L)x=^?♍,E3T;մVI\i5d2mKZE!?\c .H@ Ń:T  csޛR} TYjoQm4mS*]jM oJC.QAƔ`;c{¢2\bx Ď.SJk;U@rV~\EQߔ 20LB֮,XAjM$'#>lxa.I94.d Ì1km4{oO'G\CX}p,OJYb e!+U/3) UY}Pym3f7=]6y1sO_2]Aem(0V5hY<,'X&Z _f2!^"M"f,R]ea\.εw/tH[,)KRj?rtZ#ūLOyу,"B  hg?~` `9;#HOd|3 @Q)brPE~3kDΛ=~cb/ ѷjtN3 _~1a S*p(ŃN-LմSFd+9^K,DWlHDaPc+eD<=MY:.?^&t5تJ9c- +On»7]Dz'pSZV2U xM{⚕ [z&rhG๕sx?'+{yt>_VuOp|dM8 kmHIvR_o 8qgNmjbo(YD Q)% ċO9c54iXMBSBS Q S5 Ή :h jKSf$E `5<<\AeV] ĸN$Mƣz.gNʶ.v(% #ZFxqYJ2WH:?Of홓)|xCxjh\X֡`Y;\C|{ɺ[̈PÙ4DFl S:8FW}U`[tHӷ2xO>} ]&V;]?ӁQ(aF95rk.<ëO]H5WcٻBNh!U$+q&nm@j#ꉶxd덝9,{ 9Iɖ".ƽҎ+l:p Wjz"wiR(21`|)ĪI$q Aޡ{s/Wj aa-IfD!`0I؁))$b5@`-i  'OvXk@m;q[HD.qOA2"rG>$ɬS40RbZ(L]_qA&i4v%ذr~J ٱS2F4R!*k3Zg)W1-V$?"] IMph-Ufeu] Ɠ~%nW3J]M|I+(1-%kқqR3M"3TvMl2kI+J莉Ԥ*fk%>Ǐ~D=Q6bB&փ%&:R4{g-j&; ঌP]-N- '!ƁrT8 E(%2Ûo_%EqBRC!b0[fx T; c.r9"sgtݲ%Q9?n~iV\ ;XL[dHbH&>7v/X+ݕ˜B6?n^gf Y7 fl2epc>"N?p~*af/<Vwcluc:LјcXa{Y* e[²Y9A2)9qIMj?rMj R6a~ނey9eɁE~xߛ瓐pN_a'gcu1EK9DJYGhXIt>8qu0Z8AF,=G%+Erilw絚t> G՜ʟu'B.(W=7ަYMã7|D7e,uĎdHqҧ,Ep"*/RZ)!|X; X+eb J=:?T"ެn|uҍƧta`->Jx%pt a"~<!\*GDuku՚nM1eqouTꕲELUA=׫Yo\/*sr zLRҥF%7aGo$'r3d.^}1!L?U?穘˷M*3Ӄ-*L0;MN2<݅ G 6)f=^a4n8fISˈf-#eO-#v2b{](~a w5 _G+m,gb^-P|DA- )'M޼ O:c3"jcJ&ƹ hջւIW~ ɸ+&Y>&^3orNu^Ofc'坑8  O'5%K7e^|t= GB#v:[ hJ_3,W+qeO!hZO75{BPW(_tb}i}Fq[v:%V}$O瀋X<9m+anH~.5OzJLhf6 ыR ;S=r8ڬ{YLUzh2-1D$N5t^ !}vO1C '\;p3F5M\^Hli&Auu+ 
FMf/J(E:2_<VXR%s5ɮ 6BFNyNpb¨[N2 „Wy9F,V9(V|z|b!d)0Ilgw_&R@"E-;i}4*G }NV{J% h9h47dRZoOk> JZyeqQCbޒj/ .k T)e'^NR_'@>F Zwz~}Dž1Ζ*y3L)$ɋp`LMg+p!4`\~yM3x)W~Vv$_|>%"K ʨg2-3+VK^"ai{5IĤD(4ʄ)&]a8$5KH X."ZFAWy7If&A `g<J'-I$B >Z`A[Zhؽ$n_!1 Qf{n$LPbspUF>0ŵs⧧ڜFb.ZR[dw^q@w1>H{2H0h⩷<ȻvzGV/mIG,l2$psÑ*B&ˆZ:S=k/nyEGNȔ(? ba%8igl >֮a8L fqz\GUC(2&%望Ye Y-b(r+>5ɫYusRecoOSX(f.%0 XȻٛ?TUq0zoFau[|ڏ(ZyG-1rĴߣR1GRiM6{ wmi% ̞]}4G>榝3#W'ٚ,cl/!Ыuw5KF WZN: @Rle8K4%S Qа kAbMkfRv5JL@NϻD5%3F %:Dv҆ά!*救Kcz *|zy[T۹n;ϽIjTVS[XJƬah"J\E/1Ρ;_dvV(YDםnް$}TޢZebo5)HkYi<\_r(Slc5np*o1AK{^0bYVx΄7pfJop@:!KԌH"rDTЂg< (q~p(SZcDm=qp>JU06NbP)ECZ3* 5G\7aуhX·[EdE5jW|L?,V'ms|w?op:D'~,rr}SD FGdF4Y:mʞ*Uԩxc-K,c7ΚiReIز %X}uU L7l% 1y"UĖu5ƤxΰFe%jf[wnjaYE#sWKDS6B #1FؒkE*5Rf!_2KW'z9*I04BG(FyrQ 1JZբފhR6שׯ/E&Y^"lI+ HS}8O*wEa6Szrq:\$JKr41 Jc  Ðu1Y0x΅^߆]=GO9&h$s$Kwt~0"[Uq؝8՚]2:e\iZ`]] ҄edT=a醰t LUs1K*P樀{/N +O[%,)/oj4vz {jJ0L${*_Y nl/ Yd[fVTn6]}F6rHf;&̺G߁rgڒWY㓨\) tC"TgN8Y<֪%n"t|y? ͥȪvx8'@ 1 3RkZzA L'~7rż2E\@-WhD 0%v|9nRdV+FsU2f6f |kBUpvH+ !ACH0;oYLRQ-]uw͗9mSGkn)\NԌjY~n: (rR9 wZu>2c9m1*TXV(QrT`1& _NNYgt=ݎ4(Ɉ,I;o^q-1Bu^|5 !4]4!u<2#~)^r͎KnP) Cbb^:o(&<2sxvs L駮F뒸ZLig+Ks~\#Ƃϝs2ndn#uK3*vJf3kbesF|κOdayT 0͵r4Eԩrn) 1묑.f jPyzrBf&`] >eOl&9̞%&XF`puk()˚\" qޕkn*z>i ݍчilU0isU( 'FSG,= 35l JAze2+ [kyh'YW3mD% v͊:9'2 L3qWcZ `4c@-Ͱ({P5(Ξlm؇qW-分uVgRTsv+G-ѻ3bxFl9HAwhاBA!^C#_ ZSmi/+w/YxͻerׅZ hx+ynʆHfJoM,4ԨBE~@if smJ:+4"({l5UNk븳鐄 F #tX B0 -_ͩ=SU4Fg݋Cb<@Ӵbwnvfi>S$I G ;Eð]Ni5hEo~m^G4>-\w8\=k x `\T*2lM ʐ - [婬E˵5d^FW!TRq 2޸p&H<ߦ0F]̶\L zH!5>V!\t,A8Z;ٍ}֚j8a<]XowJ iuμiMݎ>)' m8q=H/ , S gEҸ?jRtZD gi6M/q֟3@ą&* Ǡ, 䯈0ŎJfa@oMɉp $2)=͊ MW~t$01Pl?5dqILc Gi0m/FM&$Z3dL;lD6RZ;MqkVur6А`(]T %~:VQޏ{tr{%Uzvf)/n/< .Z1ed%1PY5*0L('h5iEC>6VƫI/T K+.^|QXY2FܸYKJ(h(nLQ ̟E)NrF"!FfdaQz7Ø@>+U:hevjDfԯ徴̬8Imas(@֊ "&€3]׳pV;uG|c(n`x>Xlȕ_Ϧi<{Q 3-}v`h~ןYfЋ\ ?E"jgX׋,ۢ2Zp|D=H%M?WaA #^Y{}WE7^Z d(zR7d2n%pN3NuC( ߖJXUS /}Z,A){ ʿ֊OwfDgkrX7gp<:~/zsVՖQT2hHD&PJ@\ţgMF\<{^LCa S}8O5ZiQY%IFq Ys,gɝqXPf( g++%Rq2ߦa⦍V6}7McC|lk941VRt %O<\ۖ]0Y׶n΂#frp:w6ԤB+gsS’;VǗb7gE_))ZʛFnx1=#ZR\J VCt:~&Yw<12M8ˑJ_e&r`}>NZTǿ|t 
6#h9ʟ?DI>F.\_nS^)?-+NҎZJD[!_= 3JPq6O /ToO>Gx5x{0Ta[3(`ܜv;n0PDꞘJ*M6]F,P|n əP\reSqX|lmser >Dܖz*=r|F&RSk ]q\(2ў@гS  9ў ;a$f~+@.1jH3W _릆h oZaRcxJ(CTϷ6\P"=BȒwz:A(6.w~6)8 P<ȩ$1YhƐ6 号\v% Zvu%C 4JJ(3AX1hqt˶n2:ǩQNoV/nU  7| M'h/Jz,M ܰ,^.߽LGϵ2%^e%d`<>/~Ha2Lxa CXsY/ΞCwh^` 1`j p蓮gL+ha xg+9C45YD&9$8lZc4iȄUC#Fnj}$YǕeE# U4I*C4!XTĨN3XF(SA{n}֭ U4I4dۺiņ`Ry:hbN[:suBCrTd:3E0sCscg3g0Ѷe/;"4(me!܈yƙPe&fpez4 >aS4 R#zՎoZz[MJWs]3eӹ7 ֌J}]7z^;.g`TblJQ +bta 6$Lbb+ N 55 lF皙8bWZ^^WWW.dtU@P.%֯'cկˏStW԰Rs (my T~mx4H/`5g3 2#Nk%\AHZ7F d;JkO5[RBރ'cH㔳ӫ"U3^q-nWq(s=97DZYJvyn:/T ˹b7Q=S ppCDVE0 &Q=ߢ=Zެ?'Qhjc:bkdcFȺcA!Y,B/gӹTϷ薏떑2RBS=S-Bys4RJ8:>Zc[)0yێ!u9jR=Z)"u*Q|J+TNG+՟ґgT$jZ9Hrfz.Klfl"d[4.MrS'冠 JJKܠs&SBKh$}YuLJcӛ{oĒr[KGq񳩼=mw*٭?zs$H}x?Dv_n2FVxGfyd RB fxu)M]65E;gi7P’Pq&>> U;rTqR/6;w \Ar`Fjd{^n]V\q5Y8'5^YkBމ?^ŋ6SHzסjLq Vbgdrl4@JBe1ߔRB/鰝f/[Fo1zP۵|5ywΨQRM÷J};,0}'pNG5k[[+$\s*usEs#E?#dL9/\ҫ` SO|jG vd{emx<2AZk2xP#y" C7Lj"ۅS'4<]mg7噥.>gpF8KF9%Vd !}2&SgX8FW_;mU#B-"=\ͨPi>l@)BRdF|bB }YtU0%hΊio%="15hнXo~1t3ZbQkҌǩLH|yw=`m?:J{ZY]FVkcVVdtL[:25Aeh`T:u :! t*b vStP"@'/4 U4E4acTbG"#$>mL7#t2vGh{2̈`ݧ츴M:Al(HPL>Fh{EYuo Lj“:ޅ'!f+j c > '@%٨ʼnj(ٮq]\C"hи[bϸDa|KAu4qJsjL"j8 "'舧,l9D|ja,lyӅ-mYʛA@dOqI50Y^>E:@RF!+M6TǗvlkH_X썄&e%0nQEkTfS=ߢZ{*(#h[q|."#|7D$5;bh- bfTadDdrQ3d 1\7(6yJ[e˻is)3Bj 0Y}yZ*Ʀo+\-9 Q8rs&*Ww}ɷ~-'ۛrXC,SX dQ=m%~(! DI;XIpB%E29T$NERA>[ҭ={# )BN)K!NEROeu"Jb?5IF"J[U7*aғ/2}>y ''ݭ䱕FwOmuvr즧3!a7pF.ϓoCux=W:<._H gZ'ݪgqw>!$ZW(C6w?ܢr>pAp3qT.f]mP7Oty òHV5] bgŕXwo&}V+>d*B 3 0SYf'F31׸Yiy\ٷ ڼ6bi(v2WN$"˟wH60T7o<1\}ˇlV#@TUP]*ϓ??'A& 8낧_"LJE/2f轟a|8nQ4𛎄!Zf͇oQxg0ba1 pI5ƀE'" Djg"P(]z%53CTTsz"Jڬf0: ѩїɗȶp]|TϷTD2R"tM5 G_ڣ/e~ "C$3$yd.HENSvTdlK1ť>no.uF.R=nKAQZLA:R&GVMNf&ԧYQK\ZK}zYA8卑NLu.6.%g9`\JIAMԹ:ι"ŃYAԹ"\Zfmӭ[y.?! 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Mar 21 08:59:06 crc kubenswrapper[4932]: body:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:26.665334939 +0000 UTC m=+10.260533238,LastTimestamp:2026-03-21 08:58:26.665334939 +0000 UTC m=+10.260533238,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.151487 4932 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf8cc0ac82b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:26.665456312 +0000 UTC m=+10.260654621,LastTimestamp:2026-03-21 08:58:26.665456312 +0000 UTC m=+10.260654621,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.158538 4932 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 21 08:59:06 crc kubenswrapper[4932]: &Event{ObjectMeta:{kube-apiserver-crc.189ecf8ed98e6ff8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Mar 21 08:59:06 crc kubenswrapper[4932]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 21 08:59:06 crc kubenswrapper[4932]:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:35.672850424 +0000 UTC m=+19.268048713,LastTimestamp:2026-03-21 08:58:35.672850424 +0000 UTC m=+19.268048713,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.163884 4932 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ecf8ed9988574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:35.673511284 +0000 UTC m=+19.268709573,LastTimestamp:2026-03-21 08:58:35.673511284 +0000 UTC m=+19.268709573,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.168577 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ecf8ed98e6ff8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Mar 21 08:59:06 crc kubenswrapper[4932]: &Event{ObjectMeta:{kube-apiserver-crc.189ecf8ed98e6ff8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Mar 21 08:59:06 crc kubenswrapper[4932]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 21 08:59:06 crc kubenswrapper[4932]:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:35.672850424 +0000 UTC m=+19.268048713,LastTimestamp:2026-03-21 08:58:35.681732644 +0000 UTC m=+19.276930913,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.172140 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ecf8ed9988574\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ecf8ed9988574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:35.673511284 +0000 UTC m=+19.268709573,LastTimestamp:2026-03-21 08:58:35.681782645 +0000 UTC m=+19.276980914,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.176844 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf8cc0aaa89b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 21 08:59:06 crc kubenswrapper[4932]: &Event{ObjectMeta:{kube-controller-manager-crc.189ecf8cc0aaa89b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Mar 21 08:59:06 crc kubenswrapper[4932]: body:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:26.665334939 +0000 UTC m=+10.260533238,LastTimestamp:2026-03-21 08:58:36.666056821 +0000 UTC m=+20.261255100,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.180781 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf8cc0ac82b8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf8cc0ac82b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:26.665456312 +0000 UTC m=+10.260654621,LastTimestamp:2026-03-21 08:58:36.666108843 +0000 UTC m=+20.261307122,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.186747 4932 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 21 08:59:06 crc kubenswrapper[4932]: &Event{ObjectMeta:{kube-controller-manager-crc.189ecf9168c85aff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 21 08:59:06 crc kubenswrapper[4932]: body:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:46.665722623 +0000 UTC m=+30.260920892,LastTimestamp:2026-03-21 08:58:46.665722623 +0000 UTC m=+30.260920892,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.191519 4932 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf9168c96451 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:46.665790545 +0000 UTC m=+30.260988814,LastTimestamp:2026-03-21 08:58:46.665790545 +0000 UTC m=+30.260988814,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.196769 4932 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf9168f34d6c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:46.668537196 +0000 UTC m=+30.263735515,LastTimestamp:2026-03-21 08:58:46.668537196 +0000 UTC m=+30.263735515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.201280 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf8b4df361a2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf8b4df361a2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:20.445753762 +0000 UTC m=+4.040952071,LastTimestamp:2026-03-21 08:58:46.788181442 +0000 UTC m=+30.383379711,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.205924 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf8b64008e61\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf8b64008e61 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:20.815715937 +0000 UTC m=+4.410914226,LastTimestamp:2026-03-21 08:58:46.985401669 +0000 UTC m=+30.580599938,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.211319 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf8b64eb216e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf8b64eb216e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:20.831089006 +0000 UTC m=+4.426287275,LastTimestamp:2026-03-21 08:58:47.00130433 +0000 UTC m=+30.596502599,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.217786 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf9168c85aff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 21 08:59:06 crc kubenswrapper[4932]: &Event{ObjectMeta:{kube-controller-manager-crc.189ecf9168c85aff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 21 08:59:06 crc kubenswrapper[4932]: body:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:46.665722623 +0000 UTC m=+30.260920892,LastTimestamp:2026-03-21 08:58:56.666123902 +0000 UTC m=+40.261322211,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.222813 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf9168c96451\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ecf9168c96451 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:46.665790545 +0000 UTC m=+30.260988814,LastTimestamp:2026-03-21 08:58:56.666235786 +0000 UTC m=+40.261434095,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 08:59:06 crc kubenswrapper[4932]: I0321 08:59:06.644052 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 21 08:59:06 crc kubenswrapper[4932]: I0321 08:59:06.665213 4932 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 21 08:59:06 crc kubenswrapper[4932]: I0321 08:59:06.665283 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 21 08:59:06 crc kubenswrapper[4932]: E0321 08:59:06.669205 4932 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ecf9168c85aff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Mar 21 08:59:06 crc kubenswrapper[4932]: &Event{ObjectMeta:{kube-controller-manager-crc.189ecf9168c85aff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar 21 08:59:06 crc kubenswrapper[4932]: body:
Mar 21 08:59:06 crc kubenswrapper[4932]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 08:58:46.665722623 +0000 UTC m=+30.260920892,LastTimestamp:2026-03-21 08:59:06.665263741 +0000 UTC m=+50.260462010,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Mar 21 08:59:06 crc kubenswrapper[4932]: >
Mar 21 08:59:07 crc kubenswrapper[4932]: I0321 08:59:07.645027 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 21 08:59:08 crc kubenswrapper[4932]: E0321 08:59:08.132074 4932 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 21 08:59:08 crc kubenswrapper[4932]: I0321 08:59:08.643669 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 21 08:59:09 crc kubenswrapper[4932]: I0321 08:59:09.644448 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 21 08:59:10 crc kubenswrapper[4932]: I0321 08:59:10.646008 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 21 08:59:10 crc kubenswrapper[4932]: E0321 08:59:10.698257 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 21 08:59:10 crc kubenswrapper[4932]: I0321 08:59:10.700341 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 21 08:59:10 crc kubenswrapper[4932]: I0321 08:59:10.702810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:10 crc kubenswrapper[4932]: I0321 08:59:10.702860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:10 crc kubenswrapper[4932]: I0321 08:59:10.702874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:10 crc kubenswrapper[4932]: I0321 08:59:10.702908 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 21 08:59:10 crc kubenswrapper[4932]: E0321 08:59:10.707451 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 21 08:59:11 crc kubenswrapper[4932]: I0321 08:59:11.644664 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API
group "storage.k8s.io" at the cluster scope Mar 21 08:59:12 crc kubenswrapper[4932]: I0321 08:59:12.641743 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.140537 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.140700 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.142097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.142244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.142337 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.645809 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.672069 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.672227 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.673480 4932 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.673536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.673554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:13 crc kubenswrapper[4932]: I0321 08:59:13.677263 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 08:59:14 crc kubenswrapper[4932]: I0321 08:59:14.002824 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:14 crc kubenswrapper[4932]: I0321 08:59:14.004058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:14 crc kubenswrapper[4932]: I0321 08:59:14.004125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:14 crc kubenswrapper[4932]: I0321 08:59:14.004138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:14 crc kubenswrapper[4932]: I0321 08:59:14.644667 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:15 crc kubenswrapper[4932]: I0321 08:59:15.643703 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:16 crc kubenswrapper[4932]: I0321 08:59:16.644333 4932 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:16 crc kubenswrapper[4932]: I0321 08:59:16.702209 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:16 crc kubenswrapper[4932]: I0321 08:59:16.703909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:16 crc kubenswrapper[4932]: I0321 08:59:16.703981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:16 crc kubenswrapper[4932]: I0321 08:59:16.704001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:16 crc kubenswrapper[4932]: I0321 08:59:16.705061 4932 scope.go:117] "RemoveContainer" containerID="f527126b1662223de406f7c65526257d55ac5c54f058058e5a316cbfc6a7386f" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.014880 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.016844 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec"} Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.017138 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.018187 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:17 crc 
kubenswrapper[4932]: I0321 08:59:17.018240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.018252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.644091 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:17 crc kubenswrapper[4932]: E0321 08:59:17.703052 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.707792 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.709744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.709805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.709819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:17 crc kubenswrapper[4932]: I0321 08:59:17.709853 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 08:59:17 crc kubenswrapper[4932]: E0321 08:59:17.714477 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in 
API group \"\" at the cluster scope" node="crc" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.021397 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.022045 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.024055 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" exitCode=255 Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.024113 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec"} Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.024184 4932 scope.go:117] "RemoveContainer" containerID="f527126b1662223de406f7c65526257d55ac5c54f058058e5a316cbfc6a7386f" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.024404 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.025586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.025638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.025652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:18 crc kubenswrapper[4932]: 
I0321 08:59:18.026694 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 08:59:18 crc kubenswrapper[4932]: E0321 08:59:18.026998 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 08:59:18 crc kubenswrapper[4932]: E0321 08:59:18.133205 4932 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.642461 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:18 crc kubenswrapper[4932]: I0321 08:59:18.712093 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.030283 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.033516 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.035195 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.035228 4932 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.035237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.035809 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 08:59:19 crc kubenswrapper[4932]: E0321 08:59:19.035990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 08:59:19 crc kubenswrapper[4932]: I0321 08:59:19.643842 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:20 crc kubenswrapper[4932]: I0321 08:59:20.647939 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:21 crc kubenswrapper[4932]: I0321 08:59:21.644197 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:22 crc kubenswrapper[4932]: I0321 08:59:22.643238 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:23 crc kubenswrapper[4932]: I0321 08:59:23.643380 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.642330 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:24 crc kubenswrapper[4932]: E0321 08:59:24.707609 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.714741 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.716022 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.716061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.716088 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.716110 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 08:59:24 crc kubenswrapper[4932]: E0321 08:59:24.721186 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.754368 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.754547 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.755687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.755722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.755735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.756311 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 08:59:24 crc kubenswrapper[4932]: E0321 08:59:24.756532 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.807385 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 08:59:24 crc kubenswrapper[4932]: I0321 08:59:24.819978 4932 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 21 08:59:25 crc 
kubenswrapper[4932]: I0321 08:59:25.644054 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:26 crc kubenswrapper[4932]: I0321 08:59:26.644392 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:27 crc kubenswrapper[4932]: I0321 08:59:27.644697 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:28 crc kubenswrapper[4932]: E0321 08:59:28.134233 4932 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 08:59:28 crc kubenswrapper[4932]: I0321 08:59:28.643289 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:29 crc kubenswrapper[4932]: W0321 08:59:29.158457 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 21 08:59:29 crc kubenswrapper[4932]: E0321 08:59:29.158526 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" 
logger="UnhandledError" Mar 21 08:59:29 crc kubenswrapper[4932]: I0321 08:59:29.648013 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:30 crc kubenswrapper[4932]: I0321 08:59:30.644677 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:31 crc kubenswrapper[4932]: I0321 08:59:31.645516 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 08:59:31 crc kubenswrapper[4932]: E0321 08:59:31.714631 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 21 08:59:31 crc kubenswrapper[4932]: I0321 08:59:31.721939 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:31 crc kubenswrapper[4932]: I0321 08:59:31.723575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:31 crc kubenswrapper[4932]: I0321 08:59:31.723605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:31 crc kubenswrapper[4932]: I0321 08:59:31.723614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:31 crc kubenswrapper[4932]: I0321 
08:59:31.723634 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 08:59:31 crc kubenswrapper[4932]: E0321 08:59:31.729605 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 21 08:59:32 crc kubenswrapper[4932]: I0321 08:59:32.605316 4932 csr.go:261] certificate signing request csr-rh6j6 is approved, waiting to be issued Mar 21 08:59:32 crc kubenswrapper[4932]: I0321 08:59:32.614011 4932 csr.go:257] certificate signing request csr-rh6j6 is issued Mar 21 08:59:32 crc kubenswrapper[4932]: I0321 08:59:32.718879 4932 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.214267 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.424015 4932 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 21 08:59:33 crc kubenswrapper[4932]: W0321 08:59:33.424251 4932 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.615301 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-05 17:05:59.103912337 +0000 UTC Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.615393 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6224h6m25.48852302s for next certificate rotation Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.658592 4932 apiserver.go:52] "Watching apiserver" Mar 21 
08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.672957 4932 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.673407 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.673996 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.674070 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.674104 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.674277 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.674469 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.674462 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.674504 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.675038 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.675308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.678665 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.678689 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.678814 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.678829 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.679020 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.678837 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.679158 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.679436 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.679500 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.719849 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.732832 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.740035 4932 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.744800 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.756858 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.773019 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.790455 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.804959 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826495 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826576 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826606 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826628 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826648 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826667 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826685 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826704 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.826746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827089 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod 
"925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827154 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827200 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827211 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827222 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827246 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827401 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827428 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827447 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827466 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827499 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827525 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827545 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827566 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827585 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827603 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827619 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827641 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827658 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827734 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827733 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827815 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827765 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.827905 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.828095 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.828482 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.828655 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.828918 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829078 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829163 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829218 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829252 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.829289 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 08:59:34.329262377 +0000 UTC m=+77.924460916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829378 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829340 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829466 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829490 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829509 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829256 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829534 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829597 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829640 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829670 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829701 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829719 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829733 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829763 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829789 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829839 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829872 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829895 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829916 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829942 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829961 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829978 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830000 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830018 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830035 
4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830051 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830070 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830086 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830105 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830124 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830162 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830192 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830213 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830229 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830246 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830280 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830318 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830334 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830362 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830379 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830413 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830707 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830992 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831031 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831050 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831068 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") 
pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831089 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831108 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831127 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831144 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831165 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831181 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831198 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831216 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831251 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831267 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831283 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831300 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831315 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831331 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831391 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831414 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831430 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831448 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831464 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831483 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831501 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831518 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831535 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831569 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831588 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831608 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" 
(UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831630 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831650 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831668 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831688 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831725 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831745 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831763 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831780 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831815 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831837 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831854 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831872 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829811 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831890 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829850 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831907 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829989 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.829911 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830420 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830854 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.830927 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831114 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831978 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831176 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831201 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831570 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831713 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832181 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832216 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832262 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832760 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832843 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.831923 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.832933 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833045 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833112 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833149 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833189 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833226 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833266 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833301 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833336 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833373 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833413 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833485 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833491 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833816 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833758 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833850 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833887 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833951 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.833982 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834011 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834040 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834054 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834067 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834131 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834155 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834294 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834299 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834409 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834452 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834474 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834528 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834560 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834585 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834612 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834639 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834668 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834681 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834723 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834871 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834975 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.835088 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.835145 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.835146 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.835170 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.835537 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.835400 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836206 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836231 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.834699 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836374 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836406 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836451 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836469 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836467 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836491 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836524 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836544 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836562 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836578 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836594 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836615 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836622 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836636 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836678 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836696 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836699 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836736 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836754 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836772 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836790 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836811 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836828 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836845 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836861 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836887 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836914 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836971 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.836994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837012 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837055 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837065 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837074 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837092 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837136 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837168 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837237 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837302 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837323 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837368 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837389 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837411 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837468 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837493 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837510 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837530 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837546 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837641 4932 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837652 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837663 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837673 4932 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc 
kubenswrapper[4932]: I0321 08:59:33.837686 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837697 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837708 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837718 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837729 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837740 4932 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837752 4932 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837762 4932 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837773 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837783 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837795 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837807 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837819 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837829 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837839 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837850 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837859 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837869 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837879 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837890 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837900 4932 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837911 4932 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837921 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837938 4932 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837949 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837959 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837969 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837978 4932 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837993 4932 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838004 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838015 4932 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838025 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838034 4932 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838043 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838055 4932 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838065 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node 
\"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838075 4932 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838085 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838111 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838120 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838130 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838141 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838151 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839963 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840106 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840140 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840488 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.842422 4932 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.842564 4932 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.843048 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837133 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837139 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837161 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837710 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837730 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.837956 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838225 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838434 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838449 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838639 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838654 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838714 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838733 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.838828 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839116 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839259 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839296 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839694 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.839783 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840123 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840151 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840250 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840246 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840385 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840450 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840573 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.840608 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.840651 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840770 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.840968 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.841031 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.841076 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.841098 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.843109 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.844636 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.844928 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.845105 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.845271 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.845293 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.845338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.845461 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.846284 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.846597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.846616 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.846724 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847009 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847077 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847390 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847490 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847683 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.847900 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848130 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848153 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848218 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848632 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848778 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848894 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.848923 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.849040 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.849556 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.849732 4932 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.849775 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:34.34975299 +0000 UTC m=+77.944951449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.849836 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:34.349807772 +0000 UTC m=+77.945006041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.849962 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850159 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850193 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850213 4932 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850228 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850243 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850261 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850280 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850294 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.850995 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.853283 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.855485 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.858021 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.859029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.859907 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.860014 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.860047 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.860064 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.860128 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:34.360109235 +0000 UTC m=+77.955307714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.860182 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.860652 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.860769 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.861067 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.861132 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.861192 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.861937 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.862044 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.862538 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.862560 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.862848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.863853 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.866277 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.866568 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.866907 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.866948 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.866967 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.866993 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.867015 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:33 crc kubenswrapper[4932]: E0321 08:59:33.867101 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:34.367073167 +0000 UTC m=+77.962271456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.867529 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.867671 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.869342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.869716 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.869909 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.870332 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.870468 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.870649 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.870714 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.870764 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.870966 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.871704 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.872080 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.872133 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.872475 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.872591 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.872868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.872895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.873093 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.873278 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.873593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.873747 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.873965 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.874077 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.874206 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.874381 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.874747 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.874840 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.874879 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875505 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875543 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875575 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875631 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.875962 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.876136 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.876172 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.876341 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.877056 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.877753 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.877841 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.877969 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.878154 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.880629 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.881849 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.883162 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.884759 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.898271 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.904275 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.907836 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.909958 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951384 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951433 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951492 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" 
DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951505 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951517 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951526 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951536 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951544 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951554 4932 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951563 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951574 4932 reconciler_common.go:293] "Volume detached 
for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951584 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951596 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951606 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951617 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951627 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951637 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951646 4932 reconciler_common.go:293] "Volume 
detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951654 4932 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951663 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951672 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951680 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951689 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951699 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951706 4932 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951691 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951715 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951828 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951829 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951852 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951959 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 
08:59:33.951973 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951985 4932 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.951997 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952012 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952023 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952033 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952044 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952055 4932 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952066 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952077 4932 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952088 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952098 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952108 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952119 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952131 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on 
node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952141 4932 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952151 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952166 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952177 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952193 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952204 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952213 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 
08:59:33.952222 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952235 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952245 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952255 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952266 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952275 4932 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952286 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952295 4932 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952306 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952316 4932 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952325 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952335 4932 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952362 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952373 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952384 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952406 4932 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952422 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952434 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952447 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952460 4932 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952471 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952484 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952493 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952503 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952517 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952530 4932 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952542 4932 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952556 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952567 4932 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" 
DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952576 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952585 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952595 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952606 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952619 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952632 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952645 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952659 4932 reconciler_common.go:293] "Volume detached 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952670 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952681 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952693 4932 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952706 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952720 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952733 4932 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952746 4932 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952758 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952770 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952779 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952794 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952806 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952819 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952832 4932 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952846 4932 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952859 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952871 4932 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952883 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952897 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952910 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952924 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952939 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952950 4932 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952962 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952975 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.952989 4932 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953001 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953013 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953026 4932 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" 
DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953039 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953053 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953067 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953079 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953092 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953106 4932 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953118 4932 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953134 4932 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953148 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953160 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953174 4932 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953187 4932 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953200 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953212 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953224 4932 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953237 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953249 4932 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.953260 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 21 08:59:33 crc kubenswrapper[4932]: I0321 08:59:33.999618 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.016889 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:34 crc kubenswrapper[4932]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:34 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:34 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:34 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: fi Mar 21 08:59:34 crc kubenswrapper[4932]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 21 08:59:34 crc kubenswrapper[4932]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 21 08:59:34 crc kubenswrapper[4932]: ho_enable="--enable-hybrid-overlay" Mar 21 08:59:34 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 21 08:59:34 crc kubenswrapper[4932]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 21 08:59:34 crc kubenswrapper[4932]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 21 08:59:34 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:34 crc kubenswrapper[4932]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --webhook-host=127.0.0.1 \ Mar 21 08:59:34 crc kubenswrapper[4932]: --webhook-port=9743 \ Mar 21 08:59:34 crc kubenswrapper[4932]: ${ho_enable} \ Mar 21 08:59:34 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 08:59:34 crc kubenswrapper[4932]: --disable-approver \ Mar 21 08:59:34 crc kubenswrapper[4932]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --wait-for-kubernetes-api=200s \ Mar 21 08:59:34 crc kubenswrapper[4932]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:34 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:34 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.017718 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.021014 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:34 crc kubenswrapper[4932]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:34 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:34 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:34 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: fi Mar 21 08:59:34 crc kubenswrapper[4932]: Mar 21 08:59:34 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 21 08:59:34 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:34 crc kubenswrapper[4932]: --disable-webhook \ Mar 21 08:59:34 crc kubenswrapper[4932]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:34 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:34 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.022114 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.026677 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 08:59:34 crc kubenswrapper[4932]: W0321 08:59:34.029663 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-785c5aaa1079584b74196c72c3c6791e8a8a7b01e458602239ce5025a5a2d1e5 WatchSource:0}: Error finding container 785c5aaa1079584b74196c72c3c6791e8a8a7b01e458602239ce5025a5a2d1e5: Status 404 returned error can't find the container with id 785c5aaa1079584b74196c72c3c6791e8a8a7b01e458602239ce5025a5a2d1e5 Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.033945 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.035940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 21 08:59:34 crc kubenswrapper[4932]: W0321 08:59:34.037206 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-cdbfc3c57e6c42e6bfaebb7c457da5df1b9937427a29fa697d009a42b329cb5e WatchSource:0}: Error finding container cdbfc3c57e6c42e6bfaebb7c457da5df1b9937427a29fa697d009a42b329cb5e: Status 404 returned error can't find the container with id cdbfc3c57e6c42e6bfaebb7c457da5df1b9937427a29fa697d009a42b329cb5e Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.039636 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:34 crc kubenswrapper[4932]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 21 08:59:34 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 21 08:59:34 crc kubenswrapper[4932]: source /etc/kubernetes/apiserver-url.env Mar 21 08:59:34 crc kubenswrapper[4932]: else Mar 21 08:59:34 crc kubenswrapper[4932]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 21 08:59:34 crc kubenswrapper[4932]: exit 1 Mar 21 08:59:34 crc kubenswrapper[4932]: fi Mar 21 08:59:34 crc kubenswrapper[4932]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 21 08:59:34 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:34 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.040860 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.074710 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cdbfc3c57e6c42e6bfaebb7c457da5df1b9937427a29fa697d009a42b329cb5e"} Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.076213 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"785c5aaa1079584b74196c72c3c6791e8a8a7b01e458602239ce5025a5a2d1e5"} Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.077488 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:34 crc kubenswrapper[4932]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 21 08:59:34 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 21 08:59:34 crc kubenswrapper[4932]: source /etc/kubernetes/apiserver-url.env Mar 21 08:59:34 crc kubenswrapper[4932]: else Mar 21 08:59:34 crc kubenswrapper[4932]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 21 08:59:34 crc kubenswrapper[4932]: exit 1 Mar 21 08:59:34 crc kubenswrapper[4932]: fi Mar 21 08:59:34 crc kubenswrapper[4932]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 21 08:59:34 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:34 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.077583 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.077778 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f47580bf0d9084625f1501d3280aece30bd5e399f2e23d9598b0fd3babeed0d4"} Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.079142 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.079191 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.079455 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:34 crc kubenswrapper[4932]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:34 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:34 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:34 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: fi Mar 21 08:59:34 crc kubenswrapper[4932]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not 
enabled. Mar 21 08:59:34 crc kubenswrapper[4932]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 21 08:59:34 crc kubenswrapper[4932]: ho_enable="--enable-hybrid-overlay" Mar 21 08:59:34 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 21 08:59:34 crc kubenswrapper[4932]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 21 08:59:34 crc kubenswrapper[4932]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 21 08:59:34 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:34 crc kubenswrapper[4932]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --webhook-host=127.0.0.1 \ Mar 21 08:59:34 crc kubenswrapper[4932]: --webhook-port=9743 \ Mar 21 08:59:34 crc kubenswrapper[4932]: ${ho_enable} \ Mar 21 08:59:34 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 08:59:34 crc kubenswrapper[4932]: --disable-approver \ Mar 21 08:59:34 crc kubenswrapper[4932]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --wait-for-kubernetes-api=200s \ Mar 21 08:59:34 crc kubenswrapper[4932]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:34 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:34 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.082031 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:34 crc kubenswrapper[4932]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe 
Mar 21 08:59:34 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:34 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:34 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:34 crc kubenswrapper[4932]: fi Mar 21 08:59:34 crc kubenswrapper[4932]: Mar 21 08:59:34 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 21 08:59:34 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:34 crc kubenswrapper[4932]: --disable-webhook \ Mar 21 08:59:34 crc kubenswrapper[4932]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 21 08:59:34 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:34 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:34 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.083283 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.094693 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.106899 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.117275 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.126577 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.137479 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.148528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.159380 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.168136 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.180336 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.190469 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.200316 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.210081 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.357545 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.357771 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.357844 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 08:59:35.357799671 +0000 UTC m=+78.952997980 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.357974 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.357976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.358073 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:35.358049079 +0000 UTC m=+78.953247388 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.358121 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.358361 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:35.358317567 +0000 UTC m=+78.953515826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.458861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:34 crc kubenswrapper[4932]: I0321 08:59:34.458925 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459064 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459083 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459096 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459107 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459113 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459124 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:34 crc 
kubenswrapper[4932]: E0321 08:59:34.459173 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:35.459156462 +0000 UTC m=+79.054354731 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:34 crc kubenswrapper[4932]: E0321 08:59:34.459191 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:35.459183513 +0000 UTC m=+79.054381772 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.367898 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.368085 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 08:59:37.368048147 +0000 UTC m=+80.963246456 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.368142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.368218 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.368333 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.368438 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.368474 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:37.368456249 +0000 UTC m=+80.963654558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.368520 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:37.36849579 +0000 UTC m=+80.963694109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.469668 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.469802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470020 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470078 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470110 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470026 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470197 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470208 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:37.470177141 +0000 UTC m=+81.065375450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470223 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.470294 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:37.470269424 +0000 UTC m=+81.065467733 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.701908 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.701908 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.702149 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.702097 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.702237 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 08:59:35 crc kubenswrapper[4932]: E0321 08:59:35.702548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.708483 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.710676 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.713584 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.714998 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.716382 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.717062 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.717833 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.719179 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.719998 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.722690 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.723328 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.724247 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.724762 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.725325 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.725887 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.726524 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.727242 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.727678 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.728245 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.728976 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.729502 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.730119 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.730666 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.731457 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.731921 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.733963 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.735905 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.737911 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.739015 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.740075 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.740672 4932 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.740806 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.742222 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.744090 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.744635 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.746642 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.748155 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.749296 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.750778 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.751746 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.753280 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.754852 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.756525 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.757961 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.759187 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.760516 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.761861 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.763530 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.764206 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.764843 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.766098 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.766983 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.768147 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.768681 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Mar 21 08:59:35 crc kubenswrapper[4932]: I0321 08:59:35.823865 4932 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.389152 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.389270 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.389342 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 08:59:41.389306081 +0000 UTC m=+84.984504390 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.389431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.389459 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.389511 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:41.389500307 +0000 UTC m=+84.984698586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.389560 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.389612 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:41.38959772 +0000 UTC m=+84.984796029 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.490524 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.490614 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490828 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490856 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490878 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490862 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490935 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490954 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.490982 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:41.490954081 +0000 UTC m=+85.086152380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.491095 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:41.491011782 +0000 UTC m=+85.086210061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.701997 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.702273 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.702814 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.702823 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.703168 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.703318 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.718433 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.726916 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec"
Mar 21 08:59:37 crc kubenswrapper[4932]: E0321 08:59:37.727290 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.728438 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.739657 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.758797 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.773957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.790371 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:37 crc kubenswrapper[4932]: I0321 08:59:37.805239 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.091867 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.092220 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.730254 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.732069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.732127 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.732143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.732233 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.742507 4932 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.742653 4932 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.744106 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.744163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.744181 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.744203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 
08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.744217 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.759551 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.764204 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.764246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.764259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.764284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.764301 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.777713 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.781693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.781746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.781766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.781791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.781807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.794411 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.798987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.799044 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.799054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.799074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.799084 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.811975 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.817033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.817077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.817091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.817109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.817121 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.828830 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:38 crc kubenswrapper[4932]: E0321 08:59:38.829013 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.831005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.831054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.831070 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.831090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.831105 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.934906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.934951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.934961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.935053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:38 crc kubenswrapper[4932]: I0321 08:59:38.935072 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:38Z","lastTransitionTime":"2026-03-21T08:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.037554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.037595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.037603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.037617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.037628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.139430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.139466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.139484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.139505 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.139516 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.242138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.242166 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.242175 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.242188 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.242197 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.345580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.345654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.345674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.345699 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.345716 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.448385 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.448434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.448446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.448464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.448482 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.551907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.551959 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.551980 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.552011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.552024 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.655251 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.655331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.655380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.655410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.655433 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.702207 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:39 crc kubenswrapper[4932]: E0321 08:59:39.702382 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.702435 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.702485 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:39 crc kubenswrapper[4932]: E0321 08:59:39.702695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:39 crc kubenswrapper[4932]: E0321 08:59:39.702767 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.710477 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.759219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.759270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.759285 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.759307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.759320 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.862645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.862678 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.862686 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.862699 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.862708 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.964758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.964820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.964860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.964889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:39 crc kubenswrapper[4932]: I0321 08:59:39.964912 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:39Z","lastTransitionTime":"2026-03-21T08:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.067534 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.067595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.067632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.067669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.067692 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.171203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.171291 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.171310 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.171340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.171397 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.274162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.274266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.274285 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.274304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.274317 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.376212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.376265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.376278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.376298 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.376312 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.479232 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.479433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.479525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.479558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.479617 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.583084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.583160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.583177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.583207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.583226 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.685736 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.685793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.685802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.685817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.685827 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.788665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.788713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.788725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.788742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.788755 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.892384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.892432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.892442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.892461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.892474 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.995013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.995057 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.995067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.995084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:40 crc kubenswrapper[4932]: I0321 08:59:40.995094 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:40Z","lastTransitionTime":"2026-03-21T08:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.097840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.097890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.097902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.097918 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.097927 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.199918 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.199949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.199958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.199973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.199984 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.302841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.302880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.302890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.302903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.302914 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.424239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.424314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.424337 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.424401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.424426 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.431722 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.432061 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.432124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.432220 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.432284 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:49.432265783 +0000 UTC m=+93.027464042 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.432304 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 08:59:49.432294494 +0000 UTC m=+93.027492763 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.432457 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.432556 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:49.432528691 +0000 UTC m=+93.027727140 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.528188 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.528236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.528249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.528270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.528286 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.532860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.532936 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533094 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533119 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533137 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533136 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 08:59:41 crc 
kubenswrapper[4932]: E0321 08:59:41.533172 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533191 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533209 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:49.533190461 +0000 UTC m=+93.128388740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.533262 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 08:59:49.533232392 +0000 UTC m=+93.128430701 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.630403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.630463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.630477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.630502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.630588 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.702205 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.702236 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.702387 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.702561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.702653 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:41 crc kubenswrapper[4932]: E0321 08:59:41.702770 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.733795 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.733831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.733840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.733858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.733878 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.836475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.836524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.836544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.836569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.836586 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.939011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.939075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.939092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.939114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:41 crc kubenswrapper[4932]: I0321 08:59:41.939128 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:41Z","lastTransitionTime":"2026-03-21T08:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.042040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.042120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.042141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.042172 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.042192 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.145041 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.145114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.145128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.145152 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.145168 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.249150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.249300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.249342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.249425 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.249452 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.353478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.353557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.353570 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.353586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.353597 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.455732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.455809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.455820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.455838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.455864 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.559265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.559396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.559432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.559465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.559488 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.663327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.663389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.663400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.663425 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.663437 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.765437 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.765485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.765497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.765514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.765524 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.868719 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.868814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.868833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.868858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.868876 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.972368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.972434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.972443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.972482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:42 crc kubenswrapper[4932]: I0321 08:59:42.972492 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:42Z","lastTransitionTime":"2026-03-21T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.074850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.074879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.074887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.074904 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.074913 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.177443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.177489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.177501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.177518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.177531 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.281515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.281570 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.281586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.281605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.281616 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.384487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.384553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.384564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.384581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.384593 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.487000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.487049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.487062 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.487081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.487094 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.590333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.590433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.590458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.590482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.590498 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.693948 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.694013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.694031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.694062 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.694082 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.702246 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.702317 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:43 crc kubenswrapper[4932]: E0321 08:59:43.702419 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:43 crc kubenswrapper[4932]: E0321 08:59:43.702512 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.702246 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:43 crc kubenswrapper[4932]: E0321 08:59:43.702622 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.797147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.797225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.797240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.797261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.797277 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.899828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.899876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.899887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.899911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:43 crc kubenswrapper[4932]: I0321 08:59:43.899925 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:43Z","lastTransitionTime":"2026-03-21T08:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.002867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.002944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.002967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.002997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.003019 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.106100 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.106170 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.106191 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.106219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.106244 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.209012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.209088 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.209113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.209150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.209171 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.312396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.312472 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.312492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.312524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.312542 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.415101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.415169 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.415229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.415262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.415292 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.519518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.519677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.519709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.519784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.519807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.622992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.623037 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.623049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.623070 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.623083 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.725599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.725679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.725690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.725715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.725741 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.829412 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.829471 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.829481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.829503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.829521 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.933379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.933465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.933491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.933524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:44 crc kubenswrapper[4932]: I0321 08:59:44.933547 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:44Z","lastTransitionTime":"2026-03-21T08:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.036989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.037058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.037083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.037111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.037132 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.140710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.140840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.140860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.140890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.140950 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.244416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.244490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.244515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.244546 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.244570 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.348081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.348156 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.348183 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.348212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.348236 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.451444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.451531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.451551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.451585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.451604 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.554586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.554625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.554643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.554666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.554685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.657384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.657464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.657475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.657492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.657503 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.701831 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.702190 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.702239 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:45 crc kubenswrapper[4932]: E0321 08:59:45.702378 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:45 crc kubenswrapper[4932]: E0321 08:59:45.702491 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:45 crc kubenswrapper[4932]: E0321 08:59:45.702598 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:45 crc kubenswrapper[4932]: E0321 08:59:45.705245 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:45 crc kubenswrapper[4932]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:45 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:45 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:45 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:45 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:45 crc kubenswrapper[4932]: fi Mar 21 08:59:45 crc kubenswrapper[4932]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 21 08:59:45 crc kubenswrapper[4932]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 21 08:59:45 crc kubenswrapper[4932]: ho_enable="--enable-hybrid-overlay" Mar 21 08:59:45 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 21 08:59:45 crc kubenswrapper[4932]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 21 08:59:45 crc kubenswrapper[4932]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 21 08:59:45 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:45 crc kubenswrapper[4932]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 21 08:59:45 crc kubenswrapper[4932]: --webhook-host=127.0.0.1 \ Mar 21 08:59:45 crc kubenswrapper[4932]: --webhook-port=9743 \ Mar 21 08:59:45 crc kubenswrapper[4932]: ${ho_enable} \ Mar 21 08:59:45 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 08:59:45 crc 
kubenswrapper[4932]: --disable-approver \ Mar 21 08:59:45 crc kubenswrapper[4932]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 21 08:59:45 crc kubenswrapper[4932]: --wait-for-kubernetes-api=200s \ Mar 21 08:59:45 crc kubenswrapper[4932]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 21 08:59:45 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:45 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:45 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:45 crc kubenswrapper[4932]: E0321 08:59:45.708214 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:45 crc kubenswrapper[4932]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:45 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:45 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:45 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:45 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:45 crc kubenswrapper[4932]: fi Mar 21 08:59:45 crc kubenswrapper[4932]: Mar 21 08:59:45 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 21 08:59:45 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:45 crc kubenswrapper[4932]: --disable-webhook \ Mar 21 08:59:45 crc kubenswrapper[4932]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 21 08:59:45 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:45 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:45 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:45 crc kubenswrapper[4932]: E0321 08:59:45.709442 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.760167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.760233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.760248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.760271 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.760287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.862796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.862856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.862871 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.862890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.862901 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.965178 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.965248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.965269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.965296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:45 crc kubenswrapper[4932]: I0321 08:59:45.965310 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:45Z","lastTransitionTime":"2026-03-21T08:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.068404 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.068452 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.068461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.068479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.068491 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.172448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.172516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.172559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.172588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.172603 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.276517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.276613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.276640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.276681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.276707 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.380628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.380694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.380712 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.380742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.380761 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.484323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.484373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.484382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.484397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.484410 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.588020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.588094 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.588109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.588129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.588143 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.690287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.690395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.690408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.690434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.690448 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: E0321 08:59:46.703588 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 08:59:46 crc kubenswrapper[4932]: E0321 08:59:46.703707 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:46 crc kubenswrapper[4932]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 21 08:59:46 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:46 crc kubenswrapper[4932]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 21 08:59:46 crc kubenswrapper[4932]: source /etc/kubernetes/apiserver-url.env Mar 21 08:59:46 crc kubenswrapper[4932]: else Mar 21 08:59:46 crc kubenswrapper[4932]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 21 08:59:46 crc kubenswrapper[4932]: exit 1 Mar 21 08:59:46 crc kubenswrapper[4932]: fi Mar 21 08:59:46 crc kubenswrapper[4932]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 21 08:59:46 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:46 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:46 crc kubenswrapper[4932]: E0321 08:59:46.704797 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 21 08:59:46 crc kubenswrapper[4932]: E0321 08:59:46.704853 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet 
been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.795875 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.795948 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.795967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.795995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.796015 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.900136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.900205 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.900223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.900257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:46 crc kubenswrapper[4932]: I0321 08:59:46.900275 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:46Z","lastTransitionTime":"2026-03-21T08:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.003413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.003518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.003535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.003562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.003580 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.107237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.107342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.107396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.107429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.107449 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.211595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.211657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.211676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.211706 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.211725 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.315808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.315886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.315902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.315932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.315947 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.418601 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.418661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.418673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.418692 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.418706 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.521199 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.521232 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.521241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.521255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.521263 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.624844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.624917 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.624935 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.624954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.624968 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.701875 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.701954 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:47 crc kubenswrapper[4932]: E0321 08:59:47.702000 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:47 crc kubenswrapper[4932]: E0321 08:59:47.702110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.702175 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:47 crc kubenswrapper[4932]: E0321 08:59:47.702327 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.710121 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.724851 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.727765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.727817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.727832 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.727891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.727913 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.737382 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.747881 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.759589 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.775333 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 
08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.787542 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.796150 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.830177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.830265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.830283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.830306 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.830321 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.935176 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.935233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.935246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.935264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:47 crc kubenswrapper[4932]: I0321 08:59:47.935281 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:47Z","lastTransitionTime":"2026-03-21T08:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.037147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.037192 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.037205 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.037223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.037235 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.140120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.140514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.140527 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.140546 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.140559 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.243156 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.243224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.243237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.243252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.243262 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.346087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.346134 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.346147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.346165 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.346177 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.414505 4932 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.448144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.448196 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.448211 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.448230 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.448245 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.550885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.550920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.550928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.550945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.550954 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.654237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.654296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.654314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.654336 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.654389 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.757874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.757975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.758014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.758047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.758062 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.860912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.860956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.860965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.860983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.860993 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.963644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.963692 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.963701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.963718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:48 crc kubenswrapper[4932]: I0321 08:59:48.963729 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:48Z","lastTransitionTime":"2026-03-21T08:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.066651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.066710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.066724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.066746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.066757 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.169925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.169961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.169971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.169986 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.169996 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.192460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.192548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.192569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.192600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.192638 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.212994 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.218214 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.218252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.218267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.218289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.218303 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.234545 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.241588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.241642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.241652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.241674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.241690 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.266026 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.272994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.273035 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.273048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.273066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.273077 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.282887 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.287500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.287547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.287560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.287577 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.287589 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.297557 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.297754 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.299504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.299552 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.299563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.299584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.299596 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.402315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.402396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.402412 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.402431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.402443 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.505708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.505762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.505777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.505811 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.505831 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.509064 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.509140 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.509173 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.509253 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.509258 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:00:05.509231145 +0000 UTC m=+109.104429414 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.509313 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.509326 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:05.509306228 +0000 UTC m=+109.104504687 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.509369 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:05.509342379 +0000 UTC m=+109.104540648 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.609077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.609141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.609155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.609172 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.609183 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.610761 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.610828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.610983 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.611043 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.611055 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.611100 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:05.611086242 +0000 UTC m=+109.206284511 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.610988 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.611561 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.611587 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.611659 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:05.611639279 +0000 UTC m=+109.206837548 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.701690 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.701711 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.701764 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.702253 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.702338 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:49 crc kubenswrapper[4932]: E0321 08:59:49.702102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.711642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.711677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.711689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.711714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.711754 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.814380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.814421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.814433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.814450 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.814460 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.916771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.916816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.916828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.916844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:49 crc kubenswrapper[4932]: I0321 08:59:49.916855 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:49Z","lastTransitionTime":"2026-03-21T08:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.019439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.019751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.019817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.019899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.019985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.124121 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.124783 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.124883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.124970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.125057 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.227937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.227988 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.227998 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.228018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.228029 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.338298 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.338475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.338525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.338567 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.338596 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.442101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.442747 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.442875 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.442995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.443112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.546943 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.547557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.547594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.547621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.547641 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.650970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.651582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.651752 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.651924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.652160 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.702859 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 08:59:50 crc kubenswrapper[4932]: E0321 08:59:50.703993 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.755856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.755897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.755907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.755926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.755939 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.859297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.859367 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.859381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.859402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.859419 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.962434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.962516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.962538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.962572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:50 crc kubenswrapper[4932]: I0321 08:59:50.962594 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:50Z","lastTransitionTime":"2026-03-21T08:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.066176 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.066256 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.066273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.066303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.066323 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.169010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.169061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.169080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.169106 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.169126 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.272014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.272058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.272066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.272083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.272095 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.375553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.375627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.375642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.375663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.375701 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.479115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.479208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.479237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.479274 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.479300 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.582759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.582833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.582852 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.582884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.582904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.685621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.685719 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.685742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.685778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.685802 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.702273 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.702320 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.702340 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:51 crc kubenswrapper[4932]: E0321 08:59:51.702526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:51 crc kubenswrapper[4932]: E0321 08:59:51.702671 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:51 crc kubenswrapper[4932]: E0321 08:59:51.702776 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.789691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.789772 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.789794 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.789832 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.789855 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.893514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.893566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.893577 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.893598 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.893612 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.996261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.996658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.996735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.996842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:51 crc kubenswrapper[4932]: I0321 08:59:51.996933 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:51Z","lastTransitionTime":"2026-03-21T08:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.099681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.099728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.099739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.099758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.099770 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.202477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.202546 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.202559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.202581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.202594 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.305326 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.305423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.305435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.305452 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.305466 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.408588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.408640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.408657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.408682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.408700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.511162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.511207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.511219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.511259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.511289 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.614486 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.614532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.614543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.614563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.614577 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.717547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.717610 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.717630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.717661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.717674 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.821044 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.821115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.821129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.821152 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.821166 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.923912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.923979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.923999 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.924029 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:52 crc kubenswrapper[4932]: I0321 08:59:52.924047 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:52Z","lastTransitionTime":"2026-03-21T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.036671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.036713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.036721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.036741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.036752 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.139401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.139453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.139465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.139485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.139499 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.242448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.242494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.242504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.242527 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.242540 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.345636 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.345709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.345728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.345751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.345763 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.448688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.448744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.448757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.448778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.448791 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.552613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.552663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.552674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.552696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.552710 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.656322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.657182 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.657723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.658001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.658198 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.702175 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.702178 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.702205 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 08:59:53 crc kubenswrapper[4932]: E0321 08:59:53.702612 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 08:59:53 crc kubenswrapper[4932]: E0321 08:59:53.702374 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 08:59:53 crc kubenswrapper[4932]: E0321 08:59:53.702631 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.760705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.761040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.761155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.761245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.761307 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.865400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.865446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.865455 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.865472 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.865483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.968751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.968808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.968820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.968841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:53 crc kubenswrapper[4932]: I0321 08:59:53.968853 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:53Z","lastTransitionTime":"2026-03-21T08:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.071214 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.071276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.071285 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.071300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.071311 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.173422 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.173494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.173513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.173544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.173564 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.275936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.275977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.275988 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.276008 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.276021 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.380663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.380759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.380782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.380814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.380837 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.484043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.484097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.484109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.484130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.484142 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.588424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.588489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.588509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.588534 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.588557 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.693473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.693558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.693581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.693620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.693644 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.797323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.797440 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.797459 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.797489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.797507 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.900626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.900698 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.900718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.900758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:54 crc kubenswrapper[4932]: I0321 08:59:54.900795 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:54Z","lastTransitionTime":"2026-03-21T08:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.004240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.004301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.004313 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.004341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.004374 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.107662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.107714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.107727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.107746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.107758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.210498 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.210713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.210738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.210772 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.210795 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.314485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.314524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.314534 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.314556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.314568 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.417651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.417702 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.417722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.417748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.417767 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.521415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.521466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.521477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.521497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.521506 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.624648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.624738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.624755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.624776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.624789 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.702533 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 08:59:55 crc kubenswrapper[4932]: E0321 08:59:55.702848 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.703717 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 08:59:55 crc kubenswrapper[4932]: E0321 08:59:55.703866 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.703982 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 08:59:55 crc kubenswrapper[4932]: E0321 08:59:55.704762 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.728965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.729036 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.729061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.729095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.729120 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.832218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.832270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.832281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.832299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.832311 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.935023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.935071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.935079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.935099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:55 crc kubenswrapper[4932]: I0321 08:59:55.935111 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:55Z","lastTransitionTime":"2026-03-21T08:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.037916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.037974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.037987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.038014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.038028 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.141623 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.141687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.141697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.141714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.141723 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.243825 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.243856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.243865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.243879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.243887 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.346796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.346889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.346913 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.346940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.346958 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.450389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.450479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.450543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.450580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.450606 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.554435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.554494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.554504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.554524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.554536 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.659084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.659145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.659160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.659183 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.659197 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.762390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.762441 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.762450 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.762471 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.762483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.865894 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.865977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.866005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.866042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.866067 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.969941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.970044 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.970071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.970109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:56 crc kubenswrapper[4932]: I0321 08:59:56.970135 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:56Z","lastTransitionTime":"2026-03-21T08:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.073906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.073984 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.074000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.074023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.074037 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.177451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.177518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.177533 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.177556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.177569 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.280588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.280639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.280652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.280674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.280687 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.383705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.383794 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.383812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.383840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.383857 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.487129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.487193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.487204 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.487226 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.487239 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.590370 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.590459 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.590475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.590502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.590515 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.693321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.693388 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.693398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.693428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.693438 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.701992 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:57 crc kubenswrapper[4932]: E0321 08:59:57.702140 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.701994 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.702233 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:57 crc kubenswrapper[4932]: E0321 08:59:57.702607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:57 crc kubenswrapper[4932]: E0321 08:59:57.702741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:57 crc kubenswrapper[4932]: E0321 08:59:57.708499 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:57 crc kubenswrapper[4932]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:57 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:57 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:57 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:57 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:57 crc kubenswrapper[4932]: fi Mar 21 08:59:57 crc kubenswrapper[4932]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 21 08:59:57 crc kubenswrapper[4932]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 21 08:59:57 crc kubenswrapper[4932]: ho_enable="--enable-hybrid-overlay" Mar 21 08:59:57 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 21 08:59:57 crc kubenswrapper[4932]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 21 08:59:57 crc kubenswrapper[4932]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 21 08:59:57 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:57 crc kubenswrapper[4932]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 21 08:59:57 crc kubenswrapper[4932]: --webhook-host=127.0.0.1 \ Mar 21 08:59:57 crc kubenswrapper[4932]: --webhook-port=9743 \ Mar 21 08:59:57 crc kubenswrapper[4932]: ${ho_enable} \ Mar 21 08:59:57 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 08:59:57 crc 
kubenswrapper[4932]: --disable-approver \ Mar 21 08:59:57 crc kubenswrapper[4932]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 21 08:59:57 crc kubenswrapper[4932]: --wait-for-kubernetes-api=200s \ Mar 21 08:59:57 crc kubenswrapper[4932]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 21 08:59:57 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:57 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:57 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:57 crc kubenswrapper[4932]: E0321 08:59:57.712210 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 08:59:57 crc kubenswrapper[4932]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 08:59:57 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 08:59:57 crc kubenswrapper[4932]: set -o allexport Mar 21 08:59:57 crc kubenswrapper[4932]: source "/env/_master" Mar 21 08:59:57 crc kubenswrapper[4932]: set +o allexport Mar 21 08:59:57 crc kubenswrapper[4932]: fi Mar 21 08:59:57 crc kubenswrapper[4932]: Mar 21 08:59:57 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 21 08:59:57 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 08:59:57 crc kubenswrapper[4932]: --disable-webhook \ Mar 21 08:59:57 crc kubenswrapper[4932]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 21 08:59:57 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 08:59:57 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 08:59:57 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 08:59:57 crc kubenswrapper[4932]: E0321 08:59:57.713444 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.724636 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.735133 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.748521 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.767115 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.784310 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.795855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.795911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.795924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.795943 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.795960 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.798107 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.811874 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.824569 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.899899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.900001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.900034 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.900071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:57 crc kubenswrapper[4932]: I0321 08:59:57.900093 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:57Z","lastTransitionTime":"2026-03-21T08:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.002466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.002522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.002538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.002564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.002581 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.105113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.105183 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.105206 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.105240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.105260 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.208821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.208881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.208898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.208924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.208942 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.311632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.311687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.311696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.311770 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.311806 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.414917 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.414981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.414997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.415028 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.415043 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.518520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.518552 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.518562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.518578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.518591 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.621791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.621854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.621876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.621901 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.621919 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.725728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.725796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.725815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.725839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.725859 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.829543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.829595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.829614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.829642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.829660 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.933072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.933122 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.933137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.933165 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:58 crc kubenswrapper[4932]: I0321 08:59:58.933185 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:58Z","lastTransitionTime":"2026-03-21T08:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.036495 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.036545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.036555 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.036574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.036587 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.139658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.139715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.139725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.139745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.139758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.243826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.243880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.243896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.243920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.243940 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.346639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.346697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.346711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.346733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.346748 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.450283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.450420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.450453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.450493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.450518 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.452689 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5wwpb"] Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.453298 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5wwpb" Mar 21 08:59:59 crc kubenswrapper[4932]: W0321 08:59:59.455897 4932 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.455952 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 21 08:59:59 crc kubenswrapper[4932]: W0321 08:59:59.456131 4932 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.456221 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.457880 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.475021 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.489534 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.501884 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.524991 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 
08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.534857 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.544832 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.553562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.553638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.553663 4932 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.553698 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.553724 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.554980 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.565333 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.575607 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.614939 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f312294e-78f4-44ca-8dee-96797a8b9205-hosts-file\") pod \"node-resolver-5wwpb\" (UID: \"f312294e-78f4-44ca-8dee-96797a8b9205\") " pod="openshift-dns/node-resolver-5wwpb" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.614991 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w6tm\" (UniqueName: \"kubernetes.io/projected/f312294e-78f4-44ca-8dee-96797a8b9205-kube-api-access-7w6tm\") pod \"node-resolver-5wwpb\" (UID: \"f312294e-78f4-44ca-8dee-96797a8b9205\") " pod="openshift-dns/node-resolver-5wwpb" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.657481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.657549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 
crc kubenswrapper[4932]: I0321 08:59:59.657574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.657608 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.657704 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.659620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.659687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.659700 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.659728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.659744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.674943 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.680737 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.680789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.680801 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.680822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.680836 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.696013 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701540 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701500 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701638 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701640 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.701759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.701844 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.701941 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.702070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.714105 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.717114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f312294e-78f4-44ca-8dee-96797a8b9205-hosts-file\") pod \"node-resolver-5wwpb\" (UID: \"f312294e-78f4-44ca-8dee-96797a8b9205\") " pod="openshift-dns/node-resolver-5wwpb" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.717116 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f312294e-78f4-44ca-8dee-96797a8b9205-hosts-file\") pod \"node-resolver-5wwpb\" (UID: \"f312294e-78f4-44ca-8dee-96797a8b9205\") " pod="openshift-dns/node-resolver-5wwpb" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.717623 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w6tm\" (UniqueName: \"kubernetes.io/projected/f312294e-78f4-44ca-8dee-96797a8b9205-kube-api-access-7w6tm\") pod \"node-resolver-5wwpb\" (UID: \"f312294e-78f4-44ca-8dee-96797a8b9205\") " pod="openshift-dns/node-resolver-5wwpb" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.721661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.721720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.721744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.721778 4932 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.721803 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.737031 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.742073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.742131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.742156 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.742193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.742216 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.758770 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: E0321 08:59:59.759007 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.761484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.761532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.761593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.761622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.761645 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.857163 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2zqsw"] Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.858118 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.860210 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jmd8j"] Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.860459 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-m4n7b"] Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.860672 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-r8kxd"] Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.861413 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.861801 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jmd8j" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.862067 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.863813 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.864114 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.873282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.874005 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.874326 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.874659 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.874842 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.875277 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.875476 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.875632 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.875839 4932 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.876005 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.876181 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.879848 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.880262 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.880470 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.882127 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.882563 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.885583 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.886997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.887044 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.887064 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.887088 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.887104 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.897768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.912314 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.930911 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 
08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.940987 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.951291 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.965125 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.973058 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.989798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.989838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.989924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.989949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 
08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.989960 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T08:59:59Z","lastTransitionTime":"2026-03-21T08:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 08:59:59 crc kubenswrapper[4932]: I0321 08:59:59.990049 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.004990 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.018050 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020589 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-os-release\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 
09:00:00.020652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-hostroot\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020692 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/215b5025-0486-4911-bfbf-25b367a897df-cni-binary-copy\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020728 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a038ce15-d375-452d-b38f-6893df65dee4-cni-binary-copy\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020767 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-socket-dir-parent\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020798 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-cni-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020815 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-cni-multus\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020856 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-var-lib-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020878 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-env-overrides\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020899 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58z89\" (UniqueName: \"kubernetes.io/projected/a038ce15-d375-452d-b38f-6893df65dee4-kube-api-access-58z89\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020956 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-system-cni-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.020987 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-node-log\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021026 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dtpn\" (UniqueName: \"kubernetes.io/projected/96df7c54-2644-44b4-bcd7-13b82db2ea5d-kube-api-access-8dtpn\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021047 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-netd\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021065 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-kubelet\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021103 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-netns\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021121 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-kubelet\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021170 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8044dc63-0327-41d4-93fe-af2287271a84-proxy-tls\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021186 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-conf-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021223 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a038ce15-d375-452d-b38f-6893df65dee4-multus-daemon-config\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-system-cni-dir\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 
09:00:00.021366 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-os-release\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021425 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovn-node-metrics-cert\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021458 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-script-lib\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-cnibin\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021495 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 
09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021561 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021581 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8044dc63-0327-41d4-93fe-af2287271a84-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021596 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-etc-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021649 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-multus-certs\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021668 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-cnibin\") pod \"multus-additional-cni-plugins-r8kxd\" 
(UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021714 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/215b5025-0486-4911-bfbf-25b367a897df-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-cni-bin\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021788 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021810 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfd4\" (UniqueName: \"kubernetes.io/projected/8044dc63-0327-41d4-93fe-af2287271a84-kube-api-access-chfd4\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021857 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-slash\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021870 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-bin\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021912 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb7cr\" (UniqueName: \"kubernetes.io/projected/215b5025-0486-4911-bfbf-25b367a897df-kube-api-access-wb7cr\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021946 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-systemd\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.021985 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-ovn\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022004 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-netns\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022034 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-config\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-etc-kubernetes\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022245 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8044dc63-0327-41d4-93fe-af2287271a84-rootfs\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022336 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-systemd-units\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022391 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-log-socket\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.022424 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-k8s-cni-cncf-io\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.030552 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.042422 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.051031 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.063047 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 
08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.073514 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.083193 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.092229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.092281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.092293 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.092314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.092327 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.093296 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.102050 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.115930 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.124971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-netns\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125040 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-kubelet\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125091 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8044dc63-0327-41d4-93fe-af2287271a84-proxy-tls\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-kubelet\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125208 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a038ce15-d375-452d-b38f-6893df65dee4-multus-daemon-config\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125239 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-kubelet\") pod 
\"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125250 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-system-cni-dir\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125292 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-system-cni-dir\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125329 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-os-release\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125381 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-kubelet\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125411 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovn-node-metrics-cert\") pod \"ovnkube-node-2zqsw\" (UID: 
\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-os-release\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125518 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-script-lib\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125547 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-conf-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125601 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2zqsw\" (UID: 
\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-cnibin\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125658 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8044dc63-0327-41d4-93fe-af2287271a84-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-etc-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125703 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-multus-certs\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125697 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125727 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-cnibin\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125764 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/215b5025-0486-4911-bfbf-25b367a897df-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-conf-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125827 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125795 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 
09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125867 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-etc-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125155 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-netns\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-cni-bin\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-multus-certs\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125939 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-slash\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125956 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-cnibin\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125915 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-slash\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.125978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-bin\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126024 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-cni-bin\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126040 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb7cr\" (UniqueName: 
\"kubernetes.io/projected/215b5025-0486-4911-bfbf-25b367a897df-kube-api-access-wb7cr\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126076 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chfd4\" (UniqueName: \"kubernetes.io/projected/8044dc63-0327-41d4-93fe-af2287271a84-kube-api-access-chfd4\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-systemd\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126112 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-bin\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126128 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-ovn\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126166 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-netns\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126192 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-config\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126217 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-etc-kubernetes\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-systemd-units\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-log-socket\") 
pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-k8s-cni-cncf-io\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8044dc63-0327-41d4-93fe-af2287271a84-rootfs\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126478 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-hostroot\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/215b5025-0486-4911-bfbf-25b367a897df-cni-binary-copy\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126553 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-os-release\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " 
pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8044dc63-0327-41d4-93fe-af2287271a84-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126585 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-socket-dir-parent\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126604 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a038ce15-d375-452d-b38f-6893df65dee4-multus-daemon-config\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126632 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-cni-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-systemd\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 
09:00:00.126672 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a038ce15-d375-452d-b38f-6893df65dee4-cni-binary-copy\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126624 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-systemd-units\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126696 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-run-k8s-cni-cncf-io\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126706 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-ovn\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126708 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-var-lib-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126775 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-log-socket\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126792 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-socket-dir-parent\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126803 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-script-lib\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-os-release\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126826 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-env-overrides\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-hostroot\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") 
" pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126626 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-cnibin\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126863 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-cni-multus\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126874 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/215b5025-0486-4911-bfbf-25b367a897df-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126877 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-etc-kubernetes\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126893 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58z89\" (UniqueName: \"kubernetes.io/projected/a038ce15-d375-452d-b38f-6893df65dee4-kube-api-access-58z89\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.126796 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8044dc63-0327-41d4-93fe-af2287271a84-rootfs\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127022 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-var-lib-openvswitch\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-node-log\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127113 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-netns\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127119 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-multus-cni-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127156 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-host-var-lib-cni-multus\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dtpn\" (UniqueName: \"kubernetes.io/projected/96df7c54-2644-44b4-bcd7-13b82db2ea5d-kube-api-access-8dtpn\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127322 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-node-log\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127400 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-system-cni-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127460 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a038ce15-d375-452d-b38f-6893df65dee4-cni-binary-copy\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127291 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a038ce15-d375-452d-b38f-6893df65dee4-system-cni-dir\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") 
" pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-env-overrides\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-netd\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127601 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-netd\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127720 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/215b5025-0486-4911-bfbf-25b367a897df-cni-binary-copy\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.127761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-config\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 
09:00:00.128005 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/215b5025-0486-4911-bfbf-25b367a897df-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.129746 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.130977 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovn-node-metrics-cert\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.134362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8044dc63-0327-41d4-93fe-af2287271a84-proxy-tls\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.146784 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb7cr\" (UniqueName: 
\"kubernetes.io/projected/215b5025-0486-4911-bfbf-25b367a897df-kube-api-access-wb7cr\") pod \"multus-additional-cni-plugins-r8kxd\" (UID: \"215b5025-0486-4911-bfbf-25b367a897df\") " pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.147413 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chfd4\" (UniqueName: \"kubernetes.io/projected/8044dc63-0327-41d4-93fe-af2287271a84-kube-api-access-chfd4\") pod \"machine-config-daemon-m4n7b\" (UID: \"8044dc63-0327-41d4-93fe-af2287271a84\") " pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.147572 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58z89\" (UniqueName: \"kubernetes.io/projected/a038ce15-d375-452d-b38f-6893df65dee4-kube-api-access-58z89\") pod \"multus-jmd8j\" (UID: \"a038ce15-d375-452d-b38f-6893df65dee4\") " pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.149641 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dtpn\" (UniqueName: \"kubernetes.io/projected/96df7c54-2644-44b4-bcd7-13b82db2ea5d-kube-api-access-8dtpn\") pod \"ovnkube-node-2zqsw\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.154027 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.167758 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.180317 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.194796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.194877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.194896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.194916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.194929 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.198090 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.208416 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.219202 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jmd8j" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.224769 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.230040 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:00 crc kubenswrapper[4932]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 21 09:00:00 crc kubenswrapper[4932]: apiVersion: v1 Mar 21 09:00:00 crc kubenswrapper[4932]: clusters: Mar 21 09:00:00 crc kubenswrapper[4932]: - cluster: Mar 21 09:00:00 crc kubenswrapper[4932]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 21 09:00:00 crc kubenswrapper[4932]: server: https://api-int.crc.testing:6443 Mar 21 09:00:00 crc kubenswrapper[4932]: name: default-cluster Mar 21 09:00:00 crc kubenswrapper[4932]: contexts: Mar 21 09:00:00 crc kubenswrapper[4932]: - context: Mar 21 09:00:00 crc kubenswrapper[4932]: cluster: default-cluster Mar 21 09:00:00 crc kubenswrapper[4932]: namespace: default Mar 21 09:00:00 crc kubenswrapper[4932]: user: default-auth Mar 21 09:00:00 crc kubenswrapper[4932]: name: default-context Mar 21 09:00:00 crc kubenswrapper[4932]: current-context: default-context Mar 21 09:00:00 crc kubenswrapper[4932]: kind: Config Mar 21 09:00:00 crc kubenswrapper[4932]: preferences: {} 
Mar 21 09:00:00 crc kubenswrapper[4932]: users: Mar 21 09:00:00 crc kubenswrapper[4932]: - name: default-auth Mar 21 09:00:00 crc kubenswrapper[4932]: user: Mar 21 09:00:00 crc kubenswrapper[4932]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 21 09:00:00 crc kubenswrapper[4932]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 21 09:00:00 crc kubenswrapper[4932]: EOF Mar 21 09:00:00 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dtpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:00 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.231382 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 
09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.241470 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:00 crc kubenswrapper[4932]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 21 09:00:00 crc kubenswrapper[4932]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 21 09:00:00 crc kubenswrapper[4932]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58z89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jmd8j_openshift-multus(a038ce15-d375-452d-b38f-6893df65dee4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:00 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.242705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jmd8j" podUID="a038ce15-d375-452d-b38f-6893df65dee4" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.243145 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb7cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-r8kxd_openshift-multus(215b5025-0486-4911-bfbf-25b367a897df): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.244260 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" podUID="215b5025-0486-4911-bfbf-25b367a897df" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.244760 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chfd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.247273 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chfd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.248474 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" 
podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.298187 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.298747 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.298889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.299041 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.299167 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.401675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.401724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.401734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.401754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.401767 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.505054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.505104 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.505119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.505142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.505158 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.607730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.607779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.607790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.607809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.607820 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.619832 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.625062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w6tm\" (UniqueName: \"kubernetes.io/projected/f312294e-78f4-44ca-8dee-96797a8b9205-kube-api-access-7w6tm\") pod \"node-resolver-5wwpb\" (UID: \"f312294e-78f4-44ca-8dee-96797a8b9205\") " pod="openshift-dns/node-resolver-5wwpb" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.711067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.711137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.711164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.711195 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.711213 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.768584 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.773917 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5wwpb" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.790294 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:00 crc kubenswrapper[4932]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:00 crc kubenswrapper[4932]: set -uo pipefail Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 21 09:00:00 crc kubenswrapper[4932]: HOSTS_FILE="/etc/hosts" Mar 21 09:00:00 crc kubenswrapper[4932]: TEMP_FILE="/etc/hosts.tmp" Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: # Make a temporary file with the old hosts file's attributes. Mar 21 09:00:00 crc kubenswrapper[4932]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 21 09:00:00 crc kubenswrapper[4932]: echo "Failed to preserve hosts file. Exiting." 
Mar 21 09:00:00 crc kubenswrapper[4932]: exit 1 Mar 21 09:00:00 crc kubenswrapper[4932]: fi Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: while true; do Mar 21 09:00:00 crc kubenswrapper[4932]: declare -A svc_ips Mar 21 09:00:00 crc kubenswrapper[4932]: for svc in "${services[@]}"; do Mar 21 09:00:00 crc kubenswrapper[4932]: # Fetch service IP from cluster dns if present. We make several tries Mar 21 09:00:00 crc kubenswrapper[4932]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 21 09:00:00 crc kubenswrapper[4932]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 21 09:00:00 crc kubenswrapper[4932]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 21 09:00:00 crc kubenswrapper[4932]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:00 crc kubenswrapper[4932]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:00 crc kubenswrapper[4932]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:00 crc kubenswrapper[4932]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 21 09:00:00 crc kubenswrapper[4932]: for i in ${!cmds[*]} Mar 21 09:00:00 crc kubenswrapper[4932]: do Mar 21 09:00:00 crc kubenswrapper[4932]: ips=($(eval "${cmds[i]}")) Mar 21 09:00:00 crc kubenswrapper[4932]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 21 09:00:00 crc kubenswrapper[4932]: svc_ips["${svc}"]="${ips[@]}" Mar 21 09:00:00 crc kubenswrapper[4932]: break Mar 21 09:00:00 crc kubenswrapper[4932]: fi Mar 21 09:00:00 crc kubenswrapper[4932]: done Mar 21 09:00:00 crc kubenswrapper[4932]: done Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: # Update /etc/hosts only if we get valid service IPs Mar 21 09:00:00 crc kubenswrapper[4932]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 21 09:00:00 crc kubenswrapper[4932]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 21 09:00:00 crc kubenswrapper[4932]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 21 09:00:00 crc kubenswrapper[4932]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 21 09:00:00 crc kubenswrapper[4932]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 21 09:00:00 crc kubenswrapper[4932]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 21 09:00:00 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:00 crc kubenswrapper[4932]: continue Mar 21 09:00:00 crc kubenswrapper[4932]: fi Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: # Append resolver entries for services Mar 21 09:00:00 crc kubenswrapper[4932]: rc=0 Mar 21 09:00:00 crc kubenswrapper[4932]: for svc in "${!svc_ips[@]}"; do Mar 21 09:00:00 crc kubenswrapper[4932]: for ip in ${svc_ips[${svc}]}; do Mar 21 09:00:00 crc kubenswrapper[4932]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 21 09:00:00 crc kubenswrapper[4932]: done Mar 21 09:00:00 crc kubenswrapper[4932]: done Mar 21 09:00:00 crc kubenswrapper[4932]: if [[ $rc -ne 0 ]]; then Mar 21 09:00:00 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:00 crc kubenswrapper[4932]: continue Mar 21 09:00:00 crc kubenswrapper[4932]: fi Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: Mar 21 09:00:00 crc kubenswrapper[4932]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 21 09:00:00 crc kubenswrapper[4932]: # Replace /etc/hosts with our modified version if needed Mar 21 09:00:00 crc kubenswrapper[4932]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 21 09:00:00 crc kubenswrapper[4932]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 21 09:00:00 crc kubenswrapper[4932]: fi Mar 21 09:00:00 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:00 crc kubenswrapper[4932]: unset svc_ips Mar 21 09:00:00 crc kubenswrapper[4932]: done Mar 21 09:00:00 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w6tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-5wwpb_openshift-dns(f312294e-78f4-44ca-8dee-96797a8b9205): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:00 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:00 crc kubenswrapper[4932]: E0321 09:00:00.791596 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-5wwpb" podUID="f312294e-78f4-44ca-8dee-96797a8b9205" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.814903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.814977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc 
kubenswrapper[4932]: I0321 09:00:00.814993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.815017 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.815059 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.918876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.918985 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.919030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.919059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:00 crc kubenswrapper[4932]: I0321 09:00:00.919075 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:00Z","lastTransitionTime":"2026-03-21T09:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.021791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.021839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.021848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.021871 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.021883 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.125120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.125200 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.125221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.125253 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.125272 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.162847 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"847b43ed64f507d07f3bd50641cf592661d92a27233adf43da7444c1388a4b9d"} Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.165771 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:01 crc kubenswrapper[4932]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 21 09:00:01 crc kubenswrapper[4932]: apiVersion: v1 Mar 21 09:00:01 crc kubenswrapper[4932]: clusters: Mar 21 09:00:01 crc kubenswrapper[4932]: - cluster: Mar 21 09:00:01 crc kubenswrapper[4932]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 21 09:00:01 crc kubenswrapper[4932]: server: https://api-int.crc.testing:6443 Mar 21 09:00:01 crc kubenswrapper[4932]: name: default-cluster Mar 21 09:00:01 crc kubenswrapper[4932]: contexts: Mar 21 09:00:01 crc kubenswrapper[4932]: - context: Mar 21 09:00:01 crc kubenswrapper[4932]: cluster: default-cluster Mar 21 09:00:01 crc kubenswrapper[4932]: namespace: default Mar 21 09:00:01 crc kubenswrapper[4932]: user: default-auth Mar 21 09:00:01 crc kubenswrapper[4932]: name: default-context Mar 21 09:00:01 crc kubenswrapper[4932]: current-context: default-context Mar 21 09:00:01 crc kubenswrapper[4932]: kind: Config Mar 21 09:00:01 crc kubenswrapper[4932]: preferences: {} Mar 21 09:00:01 crc kubenswrapper[4932]: users: Mar 21 09:00:01 crc kubenswrapper[4932]: - name: default-auth Mar 21 09:00:01 crc kubenswrapper[4932]: user: Mar 21 09:00:01 crc kubenswrapper[4932]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 21 09:00:01 crc 
kubenswrapper[4932]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 21 09:00:01 crc kubenswrapper[4932]: EOF Mar 21 09:00:01 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dtpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:01 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.165911 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerStarted","Data":"9527033eb015856cd16d3b546f907bf067cbd4ae8383edc4b972d139731448e7"} Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.167015 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" 
podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.168409 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"3c64b79adb69d614d349b747a4b46b95570134f9650d5839ba88063310f18cfc"} Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.168655 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb7cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminatio
nMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-r8kxd_openshift-multus(215b5025-0486-4911-bfbf-25b367a897df): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.169907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" podUID="215b5025-0486-4911-bfbf-25b367a897df" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.170400 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chfd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.171081 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5wwpb" event={"ID":"f312294e-78f4-44ca-8dee-96797a8b9205","Type":"ContainerStarted","Data":"9ab267aeabb1217993bb2e765e1576357b6696e20004f060af4a83b8087608a4"} Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.172467 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:01 crc 
kubenswrapper[4932]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:01 crc kubenswrapper[4932]: set -uo pipefail Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 21 09:00:01 crc kubenswrapper[4932]: HOSTS_FILE="/etc/hosts" Mar 21 09:00:01 crc kubenswrapper[4932]: TEMP_FILE="/etc/hosts.tmp" Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: # Make a temporary file with the old hosts file's attributes. Mar 21 09:00:01 crc kubenswrapper[4932]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 21 09:00:01 crc kubenswrapper[4932]: echo "Failed to preserve hosts file. Exiting." Mar 21 09:00:01 crc kubenswrapper[4932]: exit 1 Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: while true; do Mar 21 09:00:01 crc kubenswrapper[4932]: declare -A svc_ips Mar 21 09:00:01 crc kubenswrapper[4932]: for svc in "${services[@]}"; do Mar 21 09:00:01 crc kubenswrapper[4932]: # Fetch service IP from cluster dns if present. We make several tries Mar 21 09:00:01 crc kubenswrapper[4932]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 21 09:00:01 crc kubenswrapper[4932]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 21 09:00:01 crc kubenswrapper[4932]: # support UDP loadbalancers and require reaching DNS through TCP. 
Mar 21 09:00:01 crc kubenswrapper[4932]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:01 crc kubenswrapper[4932]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:01 crc kubenswrapper[4932]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:01 crc kubenswrapper[4932]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 21 09:00:01 crc kubenswrapper[4932]: for i in ${!cmds[*]} Mar 21 09:00:01 crc kubenswrapper[4932]: do Mar 21 09:00:01 crc kubenswrapper[4932]: ips=($(eval "${cmds[i]}")) Mar 21 09:00:01 crc kubenswrapper[4932]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 21 09:00:01 crc kubenswrapper[4932]: svc_ips["${svc}"]="${ips[@]}" Mar 21 09:00:01 crc kubenswrapper[4932]: break Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: # Update /etc/hosts only if we get valid service IPs Mar 21 09:00:01 crc kubenswrapper[4932]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 21 09:00:01 crc kubenswrapper[4932]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 21 09:00:01 crc kubenswrapper[4932]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 21 09:00:01 crc kubenswrapper[4932]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 21 09:00:01 crc kubenswrapper[4932]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 21 09:00:01 crc kubenswrapper[4932]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 21 09:00:01 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:01 crc kubenswrapper[4932]: continue Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: # Append resolver entries for services Mar 21 09:00:01 crc kubenswrapper[4932]: rc=0 Mar 21 09:00:01 crc kubenswrapper[4932]: for svc in "${!svc_ips[@]}"; do Mar 21 09:00:01 crc kubenswrapper[4932]: for ip in ${svc_ips[${svc}]}; do Mar 21 09:00:01 crc kubenswrapper[4932]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: if [[ $rc -ne 0 ]]; then Mar 21 09:00:01 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:01 crc kubenswrapper[4932]: continue Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: Mar 21 09:00:01 crc kubenswrapper[4932]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 21 09:00:01 crc kubenswrapper[4932]: # Replace /etc/hosts with our modified version if needed Mar 21 09:00:01 crc kubenswrapper[4932]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 21 09:00:01 crc kubenswrapper[4932]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:01 crc kubenswrapper[4932]: unset svc_ips Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w6tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-5wwpb_openshift-dns(f312294e-78f4-44ca-8dee-96797a8b9205): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:01 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.172913 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerStarted","Data":"1fd310e7e02d1b45c05f878f40e572dcda31778a90c1c7a9b3cf7b4e7c15b7d7"} Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 
09:00:01.173194 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chfd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.173944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-5wwpb" podUID="f312294e-78f4-44ca-8dee-96797a8b9205" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.175281 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.175452 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:01 crc kubenswrapper[4932]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 21 09:00:01 crc kubenswrapper[4932]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 21 09:00:01 crc kubenswrapper[4932]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58z89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jmd8j_openshift-multus(a038ce15-d375-452d-b38f-6893df65dee4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:01 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.175661 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.176749 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jmd8j" podUID="a038ce15-d375-452d-b38f-6893df65dee4" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.185332 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.194992 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.206428 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 
08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.217627 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.226950 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.228190 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.228233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.228246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.228272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.228285 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.237762 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.249776 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.264101 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.282756 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.299897 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.308905 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.316200 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-svc74"] Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.316908 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.320736 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.320948 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.321258 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.321708 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.323138 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.330584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.330640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.330658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.330683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.330695 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.334367 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.354514 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.368609 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.380136 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.392006 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.402913 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.411718 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.422320 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.429564 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.433501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.433552 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.433563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.433582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.433596 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.440211 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-host\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.440256 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bgh\" (UniqueName: \"kubernetes.io/projected/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-kube-api-access-h8bgh\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.440279 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-serviceca\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.440797 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45b
fe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.451967 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.462421 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.473743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.482925 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.536393 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.536448 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.536462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.536482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.536502 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.541093 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8bgh\" (UniqueName: \"kubernetes.io/projected/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-kube-api-access-h8bgh\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.541140 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-serviceca\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.541199 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-host\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " 
pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.541249 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-host\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.542932 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-serviceca\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.563153 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8bgh\" (UniqueName: \"kubernetes.io/projected/a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb-kube-api-access-h8bgh\") pod \"node-ca-svc74\" (UID: \"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\") " pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.631575 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-svc74" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.639187 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.639274 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.639289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.639312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.639324 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.651086 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:01 crc kubenswrapper[4932]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 21 09:00:01 crc kubenswrapper[4932]: while [ true ]; Mar 21 09:00:01 crc kubenswrapper[4932]: do Mar 21 09:00:01 crc kubenswrapper[4932]: for f in $(ls /tmp/serviceca); do Mar 21 09:00:01 crc kubenswrapper[4932]: echo $f Mar 21 09:00:01 crc kubenswrapper[4932]: ca_file_path="/tmp/serviceca/${f}" Mar 21 09:00:01 crc kubenswrapper[4932]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 21 09:00:01 crc kubenswrapper[4932]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 21 09:00:01 crc kubenswrapper[4932]: if [ -e "${reg_dir_path}" ]; then Mar 21 09:00:01 crc kubenswrapper[4932]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 21 09:00:01 crc kubenswrapper[4932]: else Mar 21 09:00:01 crc kubenswrapper[4932]: mkdir $reg_dir_path Mar 21 09:00:01 crc kubenswrapper[4932]: cp $ca_file_path $reg_dir_path/ca.crt Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: for d in $(ls /etc/docker/certs.d); do Mar 21 09:00:01 crc kubenswrapper[4932]: echo $d Mar 21 09:00:01 crc kubenswrapper[4932]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 21 09:00:01 crc kubenswrapper[4932]: reg_conf_path="/tmp/serviceca/${dp}" Mar 21 09:00:01 crc kubenswrapper[4932]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 21 09:00:01 crc kubenswrapper[4932]: rm -rf /etc/docker/certs.d/$d Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: sleep 60 & wait ${!} Mar 21 09:00:01 crc kubenswrapper[4932]: done Mar 21 09:00:01 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8bgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-svc74_openshift-image-registry(a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:01 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.652271 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-svc74" podUID="a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.701699 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.701807 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.701893 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.702016 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.702664 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.702761 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.704233 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.705409 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.705743 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:01 crc kubenswrapper[4932]: container 
&Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:01 crc kubenswrapper[4932]: set -o allexport Mar 21 09:00:01 crc kubenswrapper[4932]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 21 09:00:01 crc kubenswrapper[4932]: source /etc/kubernetes/apiserver-url.env Mar 21 09:00:01 crc kubenswrapper[4932]: else Mar 21 09:00:01 crc kubenswrapper[4932]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 21 09:00:01 crc kubenswrapper[4932]: exit 1 Mar 21 09:00:01 crc kubenswrapper[4932]: fi Mar 21 09:00:01 crc kubenswrapper[4932]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 21 09:00:01 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c
64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:01 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:01 crc kubenswrapper[4932]: E0321 09:00:01.707567 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.742480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.742551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.742565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.742586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.742622 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.846014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.846081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.846104 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.846132 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.846151 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.949563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.949621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.949639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.949667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:01 crc kubenswrapper[4932]: I0321 09:00:01.949684 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:01Z","lastTransitionTime":"2026-03-21T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.053239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.053310 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.053390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.053434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.053461 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.157051 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.157146 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.157158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.157182 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.157196 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.176345 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-svc74" event={"ID":"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb","Type":"ContainerStarted","Data":"4d3b2459e9703b4f3caf079bf93c12dcf009d85d24bf5be727d5ab8b8460501d"} Mar 21 09:00:02 crc kubenswrapper[4932]: E0321 09:00:02.178517 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:02 crc kubenswrapper[4932]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 21 09:00:02 crc kubenswrapper[4932]: while [ true ]; Mar 21 09:00:02 crc kubenswrapper[4932]: do Mar 21 09:00:02 crc kubenswrapper[4932]: for f in $(ls /tmp/serviceca); do Mar 21 09:00:02 crc kubenswrapper[4932]: echo $f Mar 21 09:00:02 crc kubenswrapper[4932]: ca_file_path="/tmp/serviceca/${f}" Mar 21 09:00:02 crc kubenswrapper[4932]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 21 09:00:02 crc kubenswrapper[4932]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 21 09:00:02 crc kubenswrapper[4932]: if [ -e "${reg_dir_path}" ]; then Mar 21 09:00:02 crc kubenswrapper[4932]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 21 09:00:02 crc kubenswrapper[4932]: else Mar 21 09:00:02 crc kubenswrapper[4932]: mkdir $reg_dir_path Mar 21 09:00:02 crc kubenswrapper[4932]: cp $ca_file_path $reg_dir_path/ca.crt Mar 21 09:00:02 crc kubenswrapper[4932]: fi Mar 21 09:00:02 crc kubenswrapper[4932]: done Mar 21 09:00:02 crc kubenswrapper[4932]: for d in $(ls /etc/docker/certs.d); do Mar 21 09:00:02 crc kubenswrapper[4932]: echo $d Mar 21 09:00:02 crc kubenswrapper[4932]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 21 09:00:02 crc kubenswrapper[4932]: reg_conf_path="/tmp/serviceca/${dp}" Mar 21 09:00:02 crc kubenswrapper[4932]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 21 09:00:02 crc kubenswrapper[4932]: rm -rf /etc/docker/certs.d/$d Mar 21 09:00:02 crc kubenswrapper[4932]: fi Mar 21 09:00:02 crc kubenswrapper[4932]: done Mar 21 09:00:02 crc kubenswrapper[4932]: sleep 60 & wait ${!} Mar 21 09:00:02 crc kubenswrapper[4932]: done Mar 21 09:00:02 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8bgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-svc74_openshift-image-registry(a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:02 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:02 crc kubenswrapper[4932]: E0321 09:00:02.179764 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-svc74" podUID="a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.198073 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.211266 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.220265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.234548 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.246688 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.258405 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.260040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.260092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.260112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.260140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.260158 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.277208 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.290476 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.302747 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.313835 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.332801 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.348498 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.363197 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.363642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.363768 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.363853 4932 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.363958 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.367148 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.376244 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.467058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.467466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.467607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 
09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.467729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.467812 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.570840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.570936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.570945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.570962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.570972 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.675442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.675919 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.676099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.676663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.676814 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.703056 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.780782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.780851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.780877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.780911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.780938 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.883257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.883299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.883307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.883324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.883334 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.986841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.986895 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.986909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.986926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:02 crc kubenswrapper[4932]: I0321 09:00:02.986937 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:02Z","lastTransitionTime":"2026-03-21T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.090564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.090613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.090624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.090645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.090656 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.183308 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.186125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.186566 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.193713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.193758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.193767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.193784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.193796 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.199122 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.212616 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.223915 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.236071 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.247951 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.266290 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.277900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.286651 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.295497 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.296792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.296846 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.296858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.296878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.296890 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.306247 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.316669 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.327958 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.335051 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.344929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.399239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.399282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.399292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.399312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.399324 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.502652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.502720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.502729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.502746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.502757 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.605295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.605385 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.605400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.605432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.605448 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.702387 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.702450 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.702420 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:03 crc kubenswrapper[4932]: E0321 09:00:03.702588 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:03 crc kubenswrapper[4932]: E0321 09:00:03.702730 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:03 crc kubenswrapper[4932]: E0321 09:00:03.702827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.707331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.707366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.707377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.707393 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.707405 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.810212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.810257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.810269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.810288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.810300 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.914778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.914853 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.914872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.914899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:03 crc kubenswrapper[4932]: I0321 09:00:03.914928 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:03Z","lastTransitionTime":"2026-03-21T09:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.020176 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.020281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.020306 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.020338 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.020410 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.124320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.124503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.124531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.124567 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.124591 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.228846 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.228888 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.228902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.228924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.228936 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.331683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.331745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.331760 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.331783 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.331802 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.433838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.433889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.433902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.433923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.433936 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.538591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.538694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.538722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.538763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.538790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.642935 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.643035 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.643060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.643090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.643110 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.747115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.747187 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.747209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.747237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.747259 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.851145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.851193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.851207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.851227 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.851240 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.954776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.954851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.954870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.954902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:04 crc kubenswrapper[4932]: I0321 09:00:04.954920 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:04Z","lastTransitionTime":"2026-03-21T09:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.058701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.058766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.058780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.058804 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.058818 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.162466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.162532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.162551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.162582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.162601 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.265628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.265711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.265728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.265755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.265768 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.369549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.369610 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.369624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.369650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.369666 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.473719 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.473806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.473820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.473851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.473879 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.577269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.577337 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.577413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.577462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.577490 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.584024 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.584148 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.584275 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.584339 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:00:37.584298293 +0000 UTC m=+141.179496612 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.584417 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.584468 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.584496 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:37.584482369 +0000 UTC m=+141.179680648 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.584550 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-21 09:00:37.5845282 +0000 UTC m=+141.179726509 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.681178 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.681243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.681257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.681278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.681317 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.685075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.685216 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685339 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685393 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685408 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685481 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:37.685454468 +0000 UTC m=+141.280652907 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685533 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685566 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685589 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.685677 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 09:00:37.685648084 +0000 UTC m=+141.280846393 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.702316 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.702431 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.702503 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.702533 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.702711 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:05 crc kubenswrapper[4932]: E0321 09:00:05.702857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.784060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.784116 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.784134 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.784159 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.784175 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.887301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.887379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.887396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.887418 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.887432 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.990273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.990324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.990333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.990365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:05 crc kubenswrapper[4932]: I0321 09:00:05.990401 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:05Z","lastTransitionTime":"2026-03-21T09:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.093763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.093830 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.093839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.093865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.093879 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.197001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.197083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.197094 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.197134 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.197151 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.300900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.300946 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.300956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.300974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.300989 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.404223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.404307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.404324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.404374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.404394 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.508059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.508109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.508121 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.508140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.508151 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.611320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.611411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.611424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.611447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.611462 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.714796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.714861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.714872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.714894 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.714905 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.817688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.817750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.817763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.817785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.817798 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.921182 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.921225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.921239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.921258 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:06 crc kubenswrapper[4932]: I0321 09:00:06.921271 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:06Z","lastTransitionTime":"2026-03-21T09:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.024548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.024617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.024629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.024651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.024665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.128103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.128155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.128164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.128183 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.128230 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.231192 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.231269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.231287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.231328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.231383 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.334967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.335019 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.335030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.335049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.335061 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.440015 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.440084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.440097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.440122 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.440136 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.544204 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.544265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.544279 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.544303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.544322 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.648095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.648162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.648177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.648200 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.648217 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.701632 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.701631 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.701822 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 09:00:07 crc kubenswrapper[4932]: E0321 09:00:07.702019 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 09:00:07 crc kubenswrapper[4932]: E0321 09:00:07.702063 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 09:00:07 crc kubenswrapper[4932]: E0321 09:00:07.702127 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.720626 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.733528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.751078 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.751136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.751149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.751171 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.751185 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.753167 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.766448 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.786437 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.796107 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.811228 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.833910 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.854683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.854744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.854757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.854780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.854794 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.855722 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.872802 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.893898 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.906915 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.936138 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.950861 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.958158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.958208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.958222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.958242 4932 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 21 09:00:07 crc kubenswrapper[4932]: I0321 09:00:07.958258 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:07Z","lastTransitionTime":"2026-03-21T09:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.061587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.061647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.061666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.061696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.061716 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.164707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.164777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.164791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.164812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.164835 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.267782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.267886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.267907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.268001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.268108 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.372889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.372963 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.372981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.373010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.373030 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.476596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.476651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.476669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.476704 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.476740 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.579856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.579931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.579954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.579982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.580004 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.683448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.683525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.683545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.683575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.683597 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: E0321 09:00:08.705243 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:08 crc kubenswrapper[4932]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 09:00:08 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 09:00:08 crc kubenswrapper[4932]: set -o allexport Mar 21 09:00:08 crc kubenswrapper[4932]: source "/env/_master" Mar 21 09:00:08 crc kubenswrapper[4932]: set +o allexport Mar 21 09:00:08 crc kubenswrapper[4932]: fi Mar 21 09:00:08 crc kubenswrapper[4932]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 21 09:00:08 crc kubenswrapper[4932]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 21 09:00:08 crc kubenswrapper[4932]: ho_enable="--enable-hybrid-overlay" Mar 21 09:00:08 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 21 09:00:08 crc kubenswrapper[4932]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 21 09:00:08 crc kubenswrapper[4932]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 21 09:00:08 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 09:00:08 crc kubenswrapper[4932]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 21 09:00:08 crc kubenswrapper[4932]: --webhook-host=127.0.0.1 \ Mar 21 09:00:08 crc kubenswrapper[4932]: --webhook-port=9743 \ Mar 21 09:00:08 crc kubenswrapper[4932]: ${ho_enable} \ Mar 21 09:00:08 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 09:00:08 crc kubenswrapper[4932]: --disable-approver \ Mar 21 09:00:08 crc kubenswrapper[4932]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 21 09:00:08 crc kubenswrapper[4932]: --wait-for-kubernetes-api=200s \ Mar 21 09:00:08 crc kubenswrapper[4932]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 21 09:00:08 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 09:00:08 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:08 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:08 crc kubenswrapper[4932]: E0321 09:00:08.710023 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:08 crc kubenswrapper[4932]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 09:00:08 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 09:00:08 crc kubenswrapper[4932]: set -o allexport Mar 21 09:00:08 crc kubenswrapper[4932]: source "/env/_master" Mar 21 09:00:08 crc kubenswrapper[4932]: set +o allexport Mar 21 09:00:08 crc kubenswrapper[4932]: fi Mar 21 09:00:08 crc kubenswrapper[4932]: Mar 21 09:00:08 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 21 09:00:08 crc kubenswrapper[4932]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 21 09:00:08 crc kubenswrapper[4932]: --disable-webhook \ Mar 21 09:00:08 crc kubenswrapper[4932]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 21 09:00:08 crc kubenswrapper[4932]: --loglevel="${LOGLEVEL}" Mar 21 09:00:08 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:08 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:08 crc kubenswrapper[4932]: E0321 09:00:08.711308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.788090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.788146 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.788159 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.788181 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.788198 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.891730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.891821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.891839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.891865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.891879 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.994940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.994997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.995007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.995025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:08 crc kubenswrapper[4932]: I0321 09:00:08.995038 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:08Z","lastTransitionTime":"2026-03-21T09:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.098548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.098612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.098626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.098650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.098667 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.202313 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.202387 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.202399 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.202421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.202434 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.306631 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.306729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.306755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.306791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.306815 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.410122 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.410169 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.410181 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.410205 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.410221 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.513274 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.513403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.513431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.513469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.513495 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.617173 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.617238 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.617250 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.617272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.617285 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.701503 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.701556 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.701715 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:09 crc kubenswrapper[4932]: E0321 09:00:09.701891 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:09 crc kubenswrapper[4932]: E0321 09:00:09.702096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:09 crc kubenswrapper[4932]: E0321 09:00:09.702762 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.720133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.720251 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.720289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.720327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.720455 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.728982 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.823490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.823549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.823567 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.823595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.823613 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.904235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.904313 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.904337 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.904409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.904428 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: E0321 09:00:09.922170 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.932714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.932838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.932870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.932927 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.933088 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: E0321 09:00:09.952930 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.958917 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.958976 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.958996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.959020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.959042 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:09 crc kubenswrapper[4932]: E0321 09:00:09.977702 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.984151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.984225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.984247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.984275 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:09 crc kubenswrapper[4932]: I0321 09:00:09.984298 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:09Z","lastTransitionTime":"2026-03-21T09:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: E0321 09:00:10.000938 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.006849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.006922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.006942 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.006974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.006994 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: E0321 09:00:10.020893 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:10 crc kubenswrapper[4932]: E0321 09:00:10.021062 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.023443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.023499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.023518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.023550 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.023571 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.126510 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.126570 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.126590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.126620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.126641 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.229851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.230303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.230406 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.230510 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.230698 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.333606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.333651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.333665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.333685 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.333701 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.436642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.436701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.436712 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.436733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.436747 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.540035 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.540618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.540640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.540668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.540686 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.644571 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.644641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.644660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.644685 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.644706 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.748553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.748621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.748643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.748675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.748697 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.853315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.853421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.853447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.853478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.853500 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.956549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.956655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.956689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.956725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:10 crc kubenswrapper[4932]: I0321 09:00:10.956751 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:10Z","lastTransitionTime":"2026-03-21T09:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.060418 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.060473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.060484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.060508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.060521 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.163868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.163959 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.163984 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.164021 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.164041 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.268129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.268561 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.268654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.268775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.268863 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.372732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.372835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.372856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.372883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.372900 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.476161 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.476252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.476275 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.476303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.476321 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.579418 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.579490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.579508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.579538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.579557 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.682312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.682408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.682430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.682800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.682843 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.702476 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.702818 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.702922 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:11 crc kubenswrapper[4932]: E0321 09:00:11.703287 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:11 crc kubenswrapper[4932]: E0321 09:00:11.703522 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:11 crc kubenswrapper[4932]: E0321 09:00:11.704107 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:11 crc kubenswrapper[4932]: E0321 09:00:11.707267 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:11 crc kubenswrapper[4932]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:11 crc kubenswrapper[4932]: set -uo pipefail Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 21 09:00:11 crc kubenswrapper[4932]: HOSTS_FILE="/etc/hosts" Mar 21 09:00:11 crc kubenswrapper[4932]: TEMP_FILE="/etc/hosts.tmp" Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: # Make a temporary file with the old hosts file's attributes. Mar 21 09:00:11 crc kubenswrapper[4932]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 21 09:00:11 crc kubenswrapper[4932]: echo "Failed to preserve hosts file. Exiting." Mar 21 09:00:11 crc kubenswrapper[4932]: exit 1 Mar 21 09:00:11 crc kubenswrapper[4932]: fi Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: while true; do Mar 21 09:00:11 crc kubenswrapper[4932]: declare -A svc_ips Mar 21 09:00:11 crc kubenswrapper[4932]: for svc in "${services[@]}"; do Mar 21 09:00:11 crc kubenswrapper[4932]: # Fetch service IP from cluster dns if present. We make several tries Mar 21 09:00:11 crc kubenswrapper[4932]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 21 09:00:11 crc kubenswrapper[4932]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 21 09:00:11 crc kubenswrapper[4932]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 21 09:00:11 crc kubenswrapper[4932]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:11 crc kubenswrapper[4932]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:11 crc kubenswrapper[4932]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 21 09:00:11 crc kubenswrapper[4932]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 21 09:00:11 crc kubenswrapper[4932]: for i in ${!cmds[*]} Mar 21 09:00:11 crc kubenswrapper[4932]: do Mar 21 09:00:11 crc kubenswrapper[4932]: ips=($(eval "${cmds[i]}")) Mar 21 09:00:11 crc kubenswrapper[4932]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 21 09:00:11 crc kubenswrapper[4932]: svc_ips["${svc}"]="${ips[@]}" Mar 21 09:00:11 crc kubenswrapper[4932]: break Mar 21 09:00:11 crc kubenswrapper[4932]: fi Mar 21 09:00:11 crc kubenswrapper[4932]: done Mar 21 09:00:11 crc kubenswrapper[4932]: done Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: # Update /etc/hosts only if we get valid service IPs Mar 21 09:00:11 crc kubenswrapper[4932]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 21 09:00:11 crc kubenswrapper[4932]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 21 09:00:11 crc kubenswrapper[4932]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 21 09:00:11 crc kubenswrapper[4932]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 21 09:00:11 crc kubenswrapper[4932]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 21 09:00:11 crc kubenswrapper[4932]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 21 09:00:11 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:11 crc kubenswrapper[4932]: continue Mar 21 09:00:11 crc kubenswrapper[4932]: fi Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: # Append resolver entries for services Mar 21 09:00:11 crc kubenswrapper[4932]: rc=0 Mar 21 09:00:11 crc kubenswrapper[4932]: for svc in "${!svc_ips[@]}"; do Mar 21 09:00:11 crc kubenswrapper[4932]: for ip in ${svc_ips[${svc}]}; do Mar 21 09:00:11 crc kubenswrapper[4932]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 21 09:00:11 crc kubenswrapper[4932]: done Mar 21 09:00:11 crc kubenswrapper[4932]: done Mar 21 09:00:11 crc kubenswrapper[4932]: if [[ $rc -ne 0 ]]; then Mar 21 09:00:11 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:11 crc kubenswrapper[4932]: continue Mar 21 09:00:11 crc kubenswrapper[4932]: fi Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: Mar 21 09:00:11 crc kubenswrapper[4932]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 21 09:00:11 crc kubenswrapper[4932]: # Replace /etc/hosts with our modified version if needed Mar 21 09:00:11 crc kubenswrapper[4932]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 21 09:00:11 crc kubenswrapper[4932]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 21 09:00:11 crc kubenswrapper[4932]: fi Mar 21 09:00:11 crc kubenswrapper[4932]: sleep 60 & wait Mar 21 09:00:11 crc kubenswrapper[4932]: unset svc_ips Mar 21 09:00:11 crc kubenswrapper[4932]: done Mar 21 09:00:11 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7w6tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-5wwpb_openshift-dns(f312294e-78f4-44ca-8dee-96797a8b9205): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:11 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:11 crc kubenswrapper[4932]: E0321 09:00:11.709019 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-5wwpb" 
podUID="f312294e-78f4-44ca-8dee-96797a8b9205" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.786871 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.786957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.786976 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.787005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.787024 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.889964 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.890114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.890142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.890181 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.890206 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.945312 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7"] Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.946173 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.948818 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.950146 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.960552 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.975204 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.992193 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.993496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.993552 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.993575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.993603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:11 crc kubenswrapper[4932]: I0321 09:00:11.993622 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:11Z","lastTransitionTime":"2026-03-21T09:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.013210 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.024728 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.035879 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.048593 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.058764 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.062481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2f066ce-1e24-4e33-8d78-8a5187647c1c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.062581 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2f066ce-1e24-4e33-8d78-8a5187647c1c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.062618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5hd\" (UniqueName: 
\"kubernetes.io/projected/c2f066ce-1e24-4e33-8d78-8a5187647c1c-kube-api-access-4k5hd\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.062663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2f066ce-1e24-4e33-8d78-8a5187647c1c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.074261 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.083728 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.096759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.096845 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.096871 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.096906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.096931 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.096994 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.110091 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.128731 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.150310 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de6813
28b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.163700 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2f066ce-1e24-4e33-8d78-8a5187647c1c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.163774 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2f066ce-1e24-4e33-8d78-8a5187647c1c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.163797 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4k5hd\" (UniqueName: \"kubernetes.io/projected/c2f066ce-1e24-4e33-8d78-8a5187647c1c-kube-api-access-4k5hd\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.163822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2f066ce-1e24-4e33-8d78-8a5187647c1c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.165059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c2f066ce-1e24-4e33-8d78-8a5187647c1c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.165327 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c2f066ce-1e24-4e33-8d78-8a5187647c1c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.168032 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.171151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2f066ce-1e24-4e33-8d78-8a5187647c1c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.179321 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.191454 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5hd\" (UniqueName: \"kubernetes.io/projected/c2f066ce-1e24-4e33-8d78-8a5187647c1c-kube-api-access-4k5hd\") pod \"ovnkube-control-plane-749d76644c-hmlw7\" (UID: \"c2f066ce-1e24-4e33-8d78-8a5187647c1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.199809 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.199859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.199870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.199889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.199904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.264213 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" Mar 21 09:00:12 crc kubenswrapper[4932]: W0321 09:00:12.277076 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2f066ce_1e24_4e33_8d78_8a5187647c1c.slice/crio-0acb5619403cf6a973bd2ab03607786672dd6a88da3b4d8d58040e8ca31e1d22 WatchSource:0}: Error finding container 0acb5619403cf6a973bd2ab03607786672dd6a88da3b4d8d58040e8ca31e1d22: Status 404 returned error can't find the container with id 0acb5619403cf6a973bd2ab03607786672dd6a88da3b4d8d58040e8ca31e1d22 Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.280327 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:12 crc kubenswrapper[4932]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:12 crc kubenswrapper[4932]: set -euo pipefail Mar 21 09:00:12 crc kubenswrapper[4932]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 21 09:00:12 crc kubenswrapper[4932]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 21 09:00:12 crc kubenswrapper[4932]: # As the secret mount is optional we must wait for the files to be present. Mar 21 09:00:12 crc kubenswrapper[4932]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 21 09:00:12 crc kubenswrapper[4932]: TS=$(date +%s) Mar 21 09:00:12 crc kubenswrapper[4932]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 21 09:00:12 crc kubenswrapper[4932]: HAS_LOGGED_INFO=0 Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: log_missing_certs(){ Mar 21 09:00:12 crc kubenswrapper[4932]: CUR_TS=$(date +%s) Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 21 09:00:12 crc kubenswrapper[4932]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 21 09:00:12 crc kubenswrapper[4932]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 21 09:00:12 crc kubenswrapper[4932]: HAS_LOGGED_INFO=1 Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: } Mar 21 09:00:12 crc kubenswrapper[4932]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 21 09:00:12 crc kubenswrapper[4932]: log_missing_certs Mar 21 09:00:12 crc kubenswrapper[4932]: sleep 5 Mar 21 09:00:12 crc kubenswrapper[4932]: done Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 21 09:00:12 crc kubenswrapper[4932]: exec /usr/bin/kube-rbac-proxy \ Mar 21 09:00:12 crc kubenswrapper[4932]: --logtostderr \ Mar 21 09:00:12 crc kubenswrapper[4932]: --secure-listen-address=:9108 \ Mar 21 09:00:12 crc kubenswrapper[4932]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 21 09:00:12 crc kubenswrapper[4932]: --upstream=http://127.0.0.1:29108/ \ Mar 21 09:00:12 crc kubenswrapper[4932]: --tls-private-key-file=${TLS_PK} \ Mar 21 09:00:12 crc kubenswrapper[4932]: --tls-cert-file=${TLS_CERT} Mar 21 09:00:12 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k5hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hmlw7_openshift-ovn-kubernetes(c2f066ce-1e24-4e33-8d78-8a5187647c1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:12 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.282904 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:12 crc kubenswrapper[4932]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: set -o allexport Mar 21 09:00:12 crc kubenswrapper[4932]: source "/env/_master" Mar 21 09:00:12 crc kubenswrapper[4932]: set +o allexport Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v4_join_subnet_opt= Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 21 
09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v6_join_subnet_opt= Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v4_transit_switch_subnet_opt= Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v6_transit_switch_subnet_opt= Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: dns_name_resolver_enabled_flag= Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "false" == "true" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: persistent_ips_enabled_flag= Mar 21 09:00:12 crc kubenswrapper[4932]: if [[ "true" == "true" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: # This is needed so that converting clusters from GA to TP Mar 21 09:00:12 crc kubenswrapper[4932]: # will rollout control plane pods as well Mar 21 09:00:12 crc kubenswrapper[4932]: network_segmentation_enabled_flag= Mar 21 09:00:12 crc kubenswrapper[4932]: multi_network_enabled_flag= Mar 21 09:00:12 crc 
kubenswrapper[4932]: if [[ "true" == "true" ]]; then Mar 21 09:00:12 crc kubenswrapper[4932]: multi_network_enabled_flag="--enable-multi-network" Mar 21 09:00:12 crc kubenswrapper[4932]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 21 09:00:12 crc kubenswrapper[4932]: fi Mar 21 09:00:12 crc kubenswrapper[4932]: Mar 21 09:00:12 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 21 09:00:12 crc kubenswrapper[4932]: exec /usr/bin/ovnkube \ Mar 21 09:00:12 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 09:00:12 crc kubenswrapper[4932]: --init-cluster-manager "${K8S_NODE}" \ Mar 21 09:00:12 crc kubenswrapper[4932]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 21 09:00:12 crc kubenswrapper[4932]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 21 09:00:12 crc kubenswrapper[4932]: --metrics-bind-address "127.0.0.1:29108" \ Mar 21 09:00:12 crc kubenswrapper[4932]: --metrics-enable-pprof \ Mar 21 09:00:12 crc kubenswrapper[4932]: --metrics-enable-config-duration \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${ovn_v4_join_subnet_opt} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${ovn_v6_join_subnet_opt} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${dns_name_resolver_enabled_flag} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${persistent_ips_enabled_flag} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${multi_network_enabled_flag} \ Mar 21 09:00:12 crc kubenswrapper[4932]: ${network_segmentation_enabled_flag} Mar 21 09:00:12 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k5hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hmlw7_openshift-ovn-kubernetes(c2f066ce-1e24-4e33-8d78-8a5187647c1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:12 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.284074 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" podUID="c2f066ce-1e24-4e33-8d78-8a5187647c1c" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.302676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.302724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.302736 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.302755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.302784 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.405941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.406007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.406017 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.406032 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.406043 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.510142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.510191 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.510202 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.510221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.510238 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.613470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.613513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.613528 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.613545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.613555 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.703557 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:12 crc kubenswrapper[4932]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 21 09:00:12 crc kubenswrapper[4932]: apiVersion: v1 Mar 21 09:00:12 crc kubenswrapper[4932]: clusters: Mar 21 09:00:12 crc kubenswrapper[4932]: - cluster: Mar 21 09:00:12 crc kubenswrapper[4932]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 21 09:00:12 crc kubenswrapper[4932]: server: https://api-int.crc.testing:6443 Mar 21 09:00:12 crc kubenswrapper[4932]: name: default-cluster Mar 21 09:00:12 crc kubenswrapper[4932]: contexts: Mar 21 09:00:12 crc kubenswrapper[4932]: - context: Mar 21 09:00:12 crc kubenswrapper[4932]: cluster: default-cluster Mar 21 09:00:12 crc kubenswrapper[4932]: namespace: default Mar 21 09:00:12 crc kubenswrapper[4932]: user: default-auth Mar 21 09:00:12 crc kubenswrapper[4932]: name: default-context Mar 21 09:00:12 crc kubenswrapper[4932]: current-context: default-context Mar 21 09:00:12 crc kubenswrapper[4932]: kind: Config Mar 21 09:00:12 crc kubenswrapper[4932]: preferences: {} Mar 21 09:00:12 crc kubenswrapper[4932]: users: Mar 21 09:00:12 crc kubenswrapper[4932]: - name: default-auth Mar 21 09:00:12 crc kubenswrapper[4932]: user: Mar 21 09:00:12 crc kubenswrapper[4932]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 21 09:00:12 crc kubenswrapper[4932]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 21 09:00:12 crc kubenswrapper[4932]: EOF Mar 21 09:00:12 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dtpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:12 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.703981 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chfd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.704870 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.707670 4932 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chfd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:12 crc kubenswrapper[4932]: E0321 09:00:12.708922 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.716335 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.716403 4932 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.716421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.716446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.716461 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.820284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.820398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.820420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.820447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.820466 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.930433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.930515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.930534 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.930565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:12 crc kubenswrapper[4932]: I0321 09:00:12.930585 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:12Z","lastTransitionTime":"2026-03-21T09:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.033907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.033961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.033976 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.033995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.034010 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.138591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.138632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.138644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.138665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.138677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.221879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" event={"ID":"c2f066ce-1e24-4e33-8d78-8a5187647c1c","Type":"ContainerStarted","Data":"0acb5619403cf6a973bd2ab03607786672dd6a88da3b4d8d58040e8ca31e1d22"} Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.224005 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:13 crc kubenswrapper[4932]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:13 crc kubenswrapper[4932]: set -euo pipefail Mar 21 09:00:13 crc kubenswrapper[4932]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 21 09:00:13 crc kubenswrapper[4932]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 21 09:00:13 crc kubenswrapper[4932]: # As the secret mount is optional we must wait for the files to be present. Mar 21 09:00:13 crc kubenswrapper[4932]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 21 09:00:13 crc kubenswrapper[4932]: TS=$(date +%s) Mar 21 09:00:13 crc kubenswrapper[4932]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 21 09:00:13 crc kubenswrapper[4932]: HAS_LOGGED_INFO=0 Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: log_missing_certs(){ Mar 21 09:00:13 crc kubenswrapper[4932]: CUR_TS=$(date +%s) Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 21 09:00:13 crc kubenswrapper[4932]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 21 09:00:13 crc kubenswrapper[4932]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Mar 21 09:00:13 crc kubenswrapper[4932]: HAS_LOGGED_INFO=1 Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: } Mar 21 09:00:13 crc kubenswrapper[4932]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 21 09:00:13 crc kubenswrapper[4932]: log_missing_certs Mar 21 09:00:13 crc kubenswrapper[4932]: sleep 5 Mar 21 09:00:13 crc kubenswrapper[4932]: done Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 21 09:00:13 crc kubenswrapper[4932]: exec /usr/bin/kube-rbac-proxy \ Mar 21 09:00:13 crc kubenswrapper[4932]: --logtostderr \ Mar 21 09:00:13 crc kubenswrapper[4932]: --secure-listen-address=:9108 \ Mar 21 09:00:13 crc kubenswrapper[4932]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 21 09:00:13 crc kubenswrapper[4932]: --upstream=http://127.0.0.1:29108/ \ Mar 21 09:00:13 crc kubenswrapper[4932]: --tls-private-key-file=${TLS_PK} \ Mar 21 09:00:13 crc kubenswrapper[4932]: --tls-cert-file=${TLS_CERT} Mar 21 09:00:13 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k5hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hmlw7_openshift-ovn-kubernetes(c2f066ce-1e24-4e33-8d78-8a5187647c1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:13 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.227634 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:13 crc kubenswrapper[4932]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ -f "/env/_master" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: set -o allexport Mar 21 09:00:13 crc kubenswrapper[4932]: source "/env/_master" Mar 21 09:00:13 crc kubenswrapper[4932]: set +o allexport Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v4_join_subnet_opt= Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 21 
09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v6_join_subnet_opt= Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v4_transit_switch_subnet_opt= Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v6_transit_switch_subnet_opt= Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "" != "" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: dns_name_resolver_enabled_flag= Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "false" == "true" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: persistent_ips_enabled_flag= Mar 21 09:00:13 crc kubenswrapper[4932]: if [[ "true" == "true" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: # This is needed so that converting clusters from GA to TP Mar 21 09:00:13 crc kubenswrapper[4932]: # will rollout control plane pods as well Mar 21 09:00:13 crc kubenswrapper[4932]: network_segmentation_enabled_flag= Mar 21 09:00:13 crc kubenswrapper[4932]: multi_network_enabled_flag= Mar 21 09:00:13 crc 
kubenswrapper[4932]: if [[ "true" == "true" ]]; then Mar 21 09:00:13 crc kubenswrapper[4932]: multi_network_enabled_flag="--enable-multi-network" Mar 21 09:00:13 crc kubenswrapper[4932]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: Mar 21 09:00:13 crc kubenswrapper[4932]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 21 09:00:13 crc kubenswrapper[4932]: exec /usr/bin/ovnkube \ Mar 21 09:00:13 crc kubenswrapper[4932]: --enable-interconnect \ Mar 21 09:00:13 crc kubenswrapper[4932]: --init-cluster-manager "${K8S_NODE}" \ Mar 21 09:00:13 crc kubenswrapper[4932]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 21 09:00:13 crc kubenswrapper[4932]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 21 09:00:13 crc kubenswrapper[4932]: --metrics-bind-address "127.0.0.1:29108" \ Mar 21 09:00:13 crc kubenswrapper[4932]: --metrics-enable-pprof \ Mar 21 09:00:13 crc kubenswrapper[4932]: --metrics-enable-config-duration \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${ovn_v4_join_subnet_opt} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${ovn_v6_join_subnet_opt} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${dns_name_resolver_enabled_flag} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${persistent_ips_enabled_flag} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${multi_network_enabled_flag} \ Mar 21 09:00:13 crc kubenswrapper[4932]: ${network_segmentation_enabled_flag} Mar 21 09:00:13 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4k5hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-hmlw7_openshift-ovn-kubernetes(c2f066ce-1e24-4e33-8d78-8a5187647c1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:13 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.228869 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" podUID="c2f066ce-1e24-4e33-8d78-8a5187647c1c" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.242651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.242721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.242738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.242786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.242805 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.244334 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.257693 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.286019 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.302732 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.315533 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.325323 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.346017 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.346108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.346158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.346186 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.346242 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.354619 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.367171 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.369862 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cpgnf"] Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.370682 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.370876 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.381063 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.394101 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.404598 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.416490 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.435172 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.449339 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.450764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.450811 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.450829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.450852 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.450867 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.462052 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.473369 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.479683 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc64w\" (UniqueName: \"kubernetes.io/projected/fb0a5470-935a-4f5a-9a19-f261a853a79c-kube-api-access-lc64w\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 
21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.479919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.494971 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf0016
5414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.507980 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.516706 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.525855 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.536479 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.549049 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.556816 4932 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.556854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.556862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.556879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.556889 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.563199 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.575110 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.581375 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc64w\" (UniqueName: \"kubernetes.io/projected/fb0a5470-935a-4f5a-9a19-f261a853a79c-kube-api-access-lc64w\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.581600 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.581730 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.581866 4932 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:00:14.081839182 +0000 UTC m=+117.677037661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.587386 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.599155 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.601843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc64w\" (UniqueName: 
\"kubernetes.io/projected/fb0a5470-935a-4f5a-9a19-f261a853a79c-kube-api-access-lc64w\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.611107 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.625292 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.645654 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.667234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.667313 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.667327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.667370 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.667445 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.667170 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.678166 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.691532 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.702242 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.702675 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.703793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.703869 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.704014 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.704204 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.704443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.705175 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:13 crc kubenswrapper[4932]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 21 09:00:13 crc kubenswrapper[4932]: while [ true ]; Mar 21 09:00:13 crc kubenswrapper[4932]: do Mar 21 09:00:13 crc kubenswrapper[4932]: for f in $(ls /tmp/serviceca); do Mar 21 09:00:13 crc kubenswrapper[4932]: echo $f Mar 21 09:00:13 crc kubenswrapper[4932]: ca_file_path="/tmp/serviceca/${f}" Mar 21 09:00:13 crc kubenswrapper[4932]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 21 09:00:13 crc kubenswrapper[4932]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 21 09:00:13 crc kubenswrapper[4932]: if [ -e "${reg_dir_path}" ]; then Mar 21 09:00:13 crc kubenswrapper[4932]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 21 09:00:13 crc kubenswrapper[4932]: else Mar 21 09:00:13 crc kubenswrapper[4932]: mkdir $reg_dir_path Mar 21 09:00:13 crc kubenswrapper[4932]: cp $ca_file_path $reg_dir_path/ca.crt Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: done Mar 21 09:00:13 crc kubenswrapper[4932]: for d in $(ls /etc/docker/certs.d); do Mar 21 09:00:13 crc kubenswrapper[4932]: echo $d Mar 21 09:00:13 crc kubenswrapper[4932]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 21 09:00:13 crc kubenswrapper[4932]: reg_conf_path="/tmp/serviceca/${dp}" Mar 21 09:00:13 crc kubenswrapper[4932]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 21 09:00:13 crc kubenswrapper[4932]: rm -rf /etc/docker/certs.d/$d Mar 21 09:00:13 crc kubenswrapper[4932]: fi Mar 21 09:00:13 crc kubenswrapper[4932]: done Mar 21 09:00:13 crc kubenswrapper[4932]: sleep 60 & wait ${!} Mar 21 09:00:13 crc kubenswrapper[4932]: done Mar 21 09:00:13 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8bgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-svc74_openshift-image-registry(a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:13 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:13 crc kubenswrapper[4932]: E0321 09:00:13.707405 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-svc74" podUID="a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.772464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.772539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.772560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.772589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.772605 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.876281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.876340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.876373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.876394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.876408 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.979851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.979915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.979930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.979956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:13 crc kubenswrapper[4932]: I0321 09:00:13.979971 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:13Z","lastTransitionTime":"2026-03-21T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.083927 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.083977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.083987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.084007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.084019 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.087952 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.088186 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.088313 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:00:15.088250543 +0000 UTC m=+118.683448812 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.186261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.186323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.186341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.186409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.186434 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.288628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.288668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.288677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.288693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.288704 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.392101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.392190 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.392213 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.392246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.392270 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.410886 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.495045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.495464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.495624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.495807 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.495986 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.599165 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.599239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.599257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.599281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.599300 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.701421 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.701682 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.701950 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.702046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.702124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.702189 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.702619 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.703386 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.703527 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb7cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartP
olicy:nil,} start failed in pod multus-additional-cni-plugins-r8kxd_openshift-multus(215b5025-0486-4911-bfbf-25b367a897df): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.703720 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:14 crc kubenswrapper[4932]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 21 09:00:14 crc kubenswrapper[4932]: set -o allexport Mar 21 09:00:14 crc kubenswrapper[4932]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 21 09:00:14 crc kubenswrapper[4932]: source /etc/kubernetes/apiserver-url.env Mar 21 09:00:14 crc kubenswrapper[4932]: else Mar 21 09:00:14 crc kubenswrapper[4932]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 21 09:00:14 crc kubenswrapper[4932]: exit 1 Mar 21 09:00:14 crc kubenswrapper[4932]: fi Mar 21 09:00:14 crc kubenswrapper[4932]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 21 09:00:14 crc kubenswrapper[4932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:14 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.704715 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.704726 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have 
not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" podUID="215b5025-0486-4911-bfbf-25b367a897df" Mar 21 09:00:14 crc kubenswrapper[4932]: E0321 09:00:14.705824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.760541 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.775329 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.791852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.805481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.805564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 
09:00:14.805590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.805615 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.805635 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.809104 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.824486 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.841224 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.866899 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.884520 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.894558 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.904236 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.907672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.907710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.907722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.907740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.907755 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:14Z","lastTransitionTime":"2026-03-21T09:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.922569 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.936297 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.948817 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.966526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.975329 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.985840 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:14 crc kubenswrapper[4932]: I0321 09:00:14.996034 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.009172 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.010061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.010156 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.010222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.010289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.010477 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.099266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:15 crc kubenswrapper[4932]: E0321 09:00:15.099534 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:15 crc kubenswrapper[4932]: E0321 09:00:15.099637 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:00:17.099614132 +0000 UTC m=+120.694812401 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.113311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.113383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.113410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.113434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.113451 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.215854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.215902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.215914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.215936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.215951 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.319406 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.319464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.319478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.319498 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.319512 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.421945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.421982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.421991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.422006 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.422016 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.525837 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.526333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.526566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.526712 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.526868 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.630687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.630760 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.630790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.630825 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.630851 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.702230 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.702243 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.702277 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:15 crc kubenswrapper[4932]: E0321 09:00:15.702595 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:15 crc kubenswrapper[4932]: E0321 09:00:15.702852 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:15 crc kubenswrapper[4932]: E0321 09:00:15.702957 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.733698 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.733746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.733765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.733789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.733808 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.838082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.838190 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.838220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.838260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.838288 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.942034 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.942116 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.942140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.942175 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:15 crc kubenswrapper[4932]: I0321 09:00:15.942201 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:15Z","lastTransitionTime":"2026-03-21T09:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.045544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.045620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.045637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.045664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.045682 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.149265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.149384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.149414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.149448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.149469 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.252855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.252946 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.252962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.252985 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.253000 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.357259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.357378 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.357394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.357420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.357436 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.460460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.460529 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.460545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.460569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.460586 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.564013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.564106 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.564126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.564160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.564189 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.667748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.667842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.667868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.667903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.667926 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.702549 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:16 crc kubenswrapper[4932]: E0321 09:00:16.702863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:16 crc kubenswrapper[4932]: E0321 09:00:16.705216 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 21 09:00:16 crc kubenswrapper[4932]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 21 09:00:16 crc kubenswrapper[4932]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 21 09:00:16 crc kubenswrapper[4932]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58z89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jmd8j_openshift-multus(a038ce15-d375-452d-b38f-6893df65dee4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 21 09:00:16 crc kubenswrapper[4932]: > logger="UnhandledError" Mar 21 09:00:16 crc kubenswrapper[4932]: E0321 09:00:16.707090 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jmd8j" podUID="a038ce15-d375-452d-b38f-6893df65dee4" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.771608 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.771669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.771686 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.771713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.771732 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.876089 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.876170 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.876189 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.876222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.876240 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.979859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.979931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.979944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.979989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:16 crc kubenswrapper[4932]: I0321 09:00:16.980003 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:16Z","lastTransitionTime":"2026-03-21T09:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.083684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.083730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.083742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.083761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.083775 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:17Z","lastTransitionTime":"2026-03-21T09:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.124163 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:17 crc kubenswrapper[4932]: E0321 09:00:17.124470 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:17 crc kubenswrapper[4932]: E0321 09:00:17.124580 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:00:21.124550569 +0000 UTC m=+124.719748858 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.186373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.186807 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.186901 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.187003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.187093 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:17Z","lastTransitionTime":"2026-03-21T09:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.290049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.290118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.290141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.290171 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.290192 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:17Z","lastTransitionTime":"2026-03-21T09:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.394215 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.394297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.394319 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.394421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.394456 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:17Z","lastTransitionTime":"2026-03-21T09:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.498135 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.498198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.498218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.498240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.498257 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:17Z","lastTransitionTime":"2026-03-21T09:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.601814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.601859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.601873 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.601893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.601909 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:17Z","lastTransitionTime":"2026-03-21T09:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.702033 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:17 crc kubenswrapper[4932]: E0321 09:00:17.702080 4932 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.702050 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.702190 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:17 crc kubenswrapper[4932]: E0321 09:00:17.702232 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:17 crc kubenswrapper[4932]: E0321 09:00:17.702431 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:17 crc kubenswrapper[4932]: E0321 09:00:17.702640 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.717316 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.739505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.753952 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.770483 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.783944 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.796223 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.811562 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.824253 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.839258 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.848622 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.873203 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.882993 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.895280 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.906832 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.923257 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.935698 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:17 crc kubenswrapper[4932]: I0321 09:00:17.943828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:18 crc kubenswrapper[4932]: E0321 09:00:18.151582 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:00:18 crc kubenswrapper[4932]: I0321 09:00:18.701325 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:18 crc kubenswrapper[4932]: E0321 09:00:18.701610 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:19 crc kubenswrapper[4932]: I0321 09:00:19.532681 4932 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 21 09:00:19 crc kubenswrapper[4932]: I0321 09:00:19.702146 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:19 crc kubenswrapper[4932]: I0321 09:00:19.702205 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:19 crc kubenswrapper[4932]: E0321 09:00:19.702393 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:19 crc kubenswrapper[4932]: I0321 09:00:19.702414 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:19 crc kubenswrapper[4932]: E0321 09:00:19.702522 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:19 crc kubenswrapper[4932]: E0321 09:00:19.702650 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.421799 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.421854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.421867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.421887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.421901 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:20Z","lastTransitionTime":"2026-03-21T09:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.437803 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.442991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.443047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.443067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.443094 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.443116 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:20Z","lastTransitionTime":"2026-03-21T09:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.460765 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.465826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.465876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.465891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.465911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.465925 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:20Z","lastTransitionTime":"2026-03-21T09:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.477867 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.482209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.482247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.482259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.482278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.482295 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:20Z","lastTransitionTime":"2026-03-21T09:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.495670 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.506377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.506497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.506517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.506541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.506557 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:20Z","lastTransitionTime":"2026-03-21T09:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.518320 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.518526 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:00:20 crc kubenswrapper[4932]: I0321 09:00:20.702431 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:20 crc kubenswrapper[4932]: E0321 09:00:20.702579 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:21 crc kubenswrapper[4932]: I0321 09:00:21.172267 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:21 crc kubenswrapper[4932]: E0321 09:00:21.172485 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:21 crc kubenswrapper[4932]: E0321 09:00:21.172577 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. 
No retries permitted until 2026-03-21 09:00:29.172549744 +0000 UTC m=+132.767748033 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:21 crc kubenswrapper[4932]: I0321 09:00:21.701576 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:21 crc kubenswrapper[4932]: I0321 09:00:21.701699 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:21 crc kubenswrapper[4932]: I0321 09:00:21.701603 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:21 crc kubenswrapper[4932]: E0321 09:00:21.701837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:21 crc kubenswrapper[4932]: E0321 09:00:21.702161 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:21 crc kubenswrapper[4932]: E0321 09:00:21.702500 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:21 crc kubenswrapper[4932]: I0321 09:00:21.720097 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.260402 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b"} Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.260479 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad"} Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.281019 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.302154 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.313505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.324227 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.333835 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.347800 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c408
9baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.363791 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.376722 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.388942 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.400280 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.411654 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.422888 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.445513 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.460276 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.469120 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.476919 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.492790 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.505856 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 09:00:22 crc kubenswrapper[4932]: I0321 09:00:22.702182 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:22 crc kubenswrapper[4932]: E0321 09:00:22.702843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:23 crc kubenswrapper[4932]: E0321 09:00:23.152467 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:00:23 crc kubenswrapper[4932]: I0321 09:00:23.702425 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:23 crc kubenswrapper[4932]: I0321 09:00:23.702494 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:23 crc kubenswrapper[4932]: I0321 09:00:23.702613 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:23 crc kubenswrapper[4932]: E0321 09:00:23.702735 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:23 crc kubenswrapper[4932]: E0321 09:00:23.702809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:23 crc kubenswrapper[4932]: E0321 09:00:23.702892 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.266793 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5wwpb" event={"ID":"f312294e-78f4-44ca-8dee-96797a8b9205","Type":"ContainerStarted","Data":"f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf"} Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.281279 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.297000 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.310710 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.324565 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.336742 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.354864 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.369236 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.380413 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.393460 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.414196 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.427762 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.442895 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.457506 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.467482 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.479182 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.489673 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc 
kubenswrapper[4932]: I0321 09:00:24.501976 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.515447 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:24Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:24 crc kubenswrapper[4932]: I0321 09:00:24.702268 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:24 crc kubenswrapper[4932]: E0321 09:00:24.702407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.271560 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c"} Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.271631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb"} Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.289884 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.309151 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.323117 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.334701 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.347604 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.359852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc 
kubenswrapper[4932]: I0321 09:00:25.380591 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.396936 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.407938 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.420188 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.435512 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.449922 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.464451 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.481844 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.498023 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.511105 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.522144 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.541467 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:25Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.702098 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.702155 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:25 crc kubenswrapper[4932]: E0321 09:00:25.702590 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:25 crc kubenswrapper[4932]: E0321 09:00:25.702661 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:25 crc kubenswrapper[4932]: I0321 09:00:25.702166 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:25 crc kubenswrapper[4932]: E0321 09:00:25.702746 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:26 crc kubenswrapper[4932]: I0321 09:00:26.701545 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:26 crc kubenswrapper[4932]: E0321 09:00:26.702204 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.279158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-svc74" event={"ID":"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb","Type":"ContainerStarted","Data":"ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb"} Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.296786 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.325873 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.349052 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.372763 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.394818 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.412520 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.426830 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.445612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc 
kubenswrapper[4932]: I0321 09:00:27.467640 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.485194 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.502955 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.517386 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.532004 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.550765 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.567878 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.589182 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.608789 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.622532 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.702380 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:27 crc kubenswrapper[4932]: E0321 09:00:27.702553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.702380 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.702741 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:27 crc kubenswrapper[4932]: E0321 09:00:27.703204 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:27 crc kubenswrapper[4932]: E0321 09:00:27.703502 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.732268 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.748945 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.767892 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.781145 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.793569 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc 
kubenswrapper[4932]: I0321 09:00:27.810542 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.824956 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.839294 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.854684 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.868187 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.883519 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.899785 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.914184 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.931954 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.944098 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.959340 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.974396 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:27 crc kubenswrapper[4932]: I0321 09:00:27.997274 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: E0321 09:00:28.157695 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.284092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" event={"ID":"c2f066ce-1e24-4e33-8d78-8a5187647c1c","Type":"ContainerStarted","Data":"0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd"} Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.284180 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" event={"ID":"c2f066ce-1e24-4e33-8d78-8a5187647c1c","Type":"ContainerStarted","Data":"94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d"} Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.286071 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" exitCode=0 Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.286130 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.306935 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.321632 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.332261 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.351869 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.367426 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc 
kubenswrapper[4932]: I0321 09:00:28.383798 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.396102 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.412740 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.427225 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.438294 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.450945 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.468369 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.482153 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.498678 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.509378 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.525365 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.538447 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.560794 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.574602 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.587434 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.637094 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.650561 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc 
kubenswrapper[4932]: I0321 09:00:28.665409 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.681256 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.696965 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.702010 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:28 crc kubenswrapper[4932]: E0321 09:00:28.702160 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.709931 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.723142 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.738502 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.755840 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.776469 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.794062 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.808121 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.825105 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.846650 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7
da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.862213 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:28 crc kubenswrapper[4932]: I0321 09:00:28.878516 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.270842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:29 crc kubenswrapper[4932]: E0321 09:00:29.271085 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:29 crc kubenswrapper[4932]: E0321 09:00:29.271163 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:00:45.271144846 +0000 UTC m=+148.866343125 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.297810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.297871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.297891 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.297905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.297917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.297931 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.299366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerDied","Data":"a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c"} Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.299391 4932 generic.go:334] "Generic (PLEG): container finished" podID="215b5025-0486-4911-bfbf-25b367a897df" containerID="a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c" exitCode=0 Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.316248 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.334128 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.348737 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.371155 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.385625 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.396087 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.408360 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.420652 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.440233 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.454046 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.465660 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.477343 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.488590 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6f
b1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.500229 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc 
kubenswrapper[4932]: I0321 09:00:29.513208 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.529850 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.543718 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.557799 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:29Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.701623 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.701696 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:29 crc kubenswrapper[4932]: I0321 09:00:29.701853 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:29 crc kubenswrapper[4932]: E0321 09:00:29.701861 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:29 crc kubenswrapper[4932]: E0321 09:00:29.701977 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:29 crc kubenswrapper[4932]: E0321 09:00:29.702064 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.305448 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8"} Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.309330 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerStarted","Data":"acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1"} Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.327676 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.350641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.388325 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.412473 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.428168 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.442437 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.465493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08
:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.481310 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.499950 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.516880 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.531237 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.546473 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.558920 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc 
kubenswrapper[4932]: I0321 09:00:30.573791 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.587897 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.603167 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.614713 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.626929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.638760 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc 
kubenswrapper[4932]: I0321 09:00:30.653762 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.677743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.691787 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.702427 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.702673 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.704260 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd
4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.715313 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.727075 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.740907 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.752195 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.764363 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.778670 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.793858 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.811138 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.832912 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.851419 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.869055 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.869089 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.869102 
4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.869120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.869134 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:30Z","lastTransitionTime":"2026-03-21T09:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.876515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa372326901
9bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},
{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://
65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.886846 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.892164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.892223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.892240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.892262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.892276 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:30Z","lastTransitionTime":"2026-03-21T09:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.896492 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.906517 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-f
f86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.912453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.912496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.912509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.912529 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.912544 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:30Z","lastTransitionTime":"2026-03-21T09:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.912863 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.927525 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.932434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.932491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.932504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.932524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.932536 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:30Z","lastTransitionTime":"2026-03-21T09:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.950628 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.954209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.954249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.954265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.954285 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:30 crc kubenswrapper[4932]: I0321 09:00:30.954296 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:30Z","lastTransitionTime":"2026-03-21T09:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.968312 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:30 crc kubenswrapper[4932]: E0321 09:00:30.968591 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.317249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.318645 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590"} Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.322169 4932 generic.go:334] "Generic (PLEG): container finished" podID="215b5025-0486-4911-bfbf-25b367a897df" containerID="acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1" exitCode=0 Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.322233 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerDied","Data":"acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1"} Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.343823 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.363494 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.382457 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.395310 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.414284 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.431312 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.460053 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.482779 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.505673 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.524476 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.546751 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.615297 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-
21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.640697 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.658753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.673068 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.684462 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.698073 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.701938 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.702336 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:31 crc kubenswrapper[4932]: E0321 09:00:31.702430 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:31 crc kubenswrapper[4932]: E0321 09:00:31.702609 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.703120 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:31 crc kubenswrapper[4932]: E0321 09:00:31.703242 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.713947 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc 
kubenswrapper[4932]: I0321 09:00:31.739649 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.758953 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.780738 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.797844 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.813700 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.824887 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.839140 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc 
kubenswrapper[4932]: I0321 09:00:31.853436 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.866398 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.880130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.894625 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.907946 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.920762 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.938746 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.959873 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:31 crc kubenswrapper[4932]: I0321 09:00:31.982041 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.000249 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:31Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.017094 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.327768 4932 generic.go:334] "Generic (PLEG): container finished" podID="215b5025-0486-4911-bfbf-25b367a897df" containerID="fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02" exitCode=0 Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.327858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerDied","Data":"fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02"} Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.334431 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerStarted","Data":"41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9"} Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.341556 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.352407 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.405929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.422008 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc 
kubenswrapper[4932]: I0321 09:00:32.438508 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.451704 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.466537 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.480959 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.494603 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.510510 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.526908 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.552898 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.570642 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd4
44425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.583606 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\
\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.599736 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.624455 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.641937 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.656575 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.670130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21
T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.683665 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.701401 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:32 crc kubenswrapper[4932]: E0321 09:00:32.701716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.702272 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.715699 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc 
kubenswrapper[4932]: I0321 09:00:32.732375 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.747048 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.771155 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.811008 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.854221 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.894584 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.934050 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:32 crc kubenswrapper[4932]: I0321 09:00:32.976581 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.012301 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd4
44425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.048991 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\
\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.091131 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.135400 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: E0321 09:00:33.159458 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.173752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 
09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.214130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.345052 4932 generic.go:334] "Generic (PLEG): container finished" podID="215b5025-0486-4911-bfbf-25b367a897df" containerID="918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd" exitCode=0 Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.345143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerDied","Data":"918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd"} Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.374337 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e
860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.401706 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.419816 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.443536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.457832 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.473106 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.491369 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc 
kubenswrapper[4932]: I0321 09:00:33.530857 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.572907 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.612135 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.651938 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.692751 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.702488 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.702513 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:33 crc kubenswrapper[4932]: E0321 09:00:33.702623 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.702513 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:33 crc kubenswrapper[4932]: E0321 09:00:33.702810 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:33 crc kubenswrapper[4932]: E0321 09:00:33.702862 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.733682 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.775107 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.812518 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd4
44425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.870522 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib
/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.892461 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:33 crc kubenswrapper[4932]: I0321 09:00:33.930756 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:33Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.352777 4932 generic.go:334] "Generic (PLEG): container finished" podID="215b5025-0486-4911-bfbf-25b367a897df" containerID="b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975" exitCode=0 Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.352815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerDied","Data":"b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975"} Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.369616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2"} Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.369936 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.369975 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.373624 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.390822 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.396566 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.407110 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.421286 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.438245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.459889 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.475461 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd4
44425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.486241 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.495571 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.514217 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08
:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.530406 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.547892 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.563814 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.577875 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.594409 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.606774 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc 
kubenswrapper[4932]: I0321 09:00:34.621817 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.666493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.701963 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:34 crc kubenswrapper[4932]: E0321 09:00:34.702142 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.711939 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T
08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.731912 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.770135 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.808998 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.851332 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc 
kubenswrapper[4932]: I0321 09:00:34.895429 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.933718 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:34 crc kubenswrapper[4932]: I0321 09:00:34.976912 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:34Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.015998 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.051878 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.092665 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.133429 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.174720 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.220502 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.248835 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.291556 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.330437 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.376583 4932 generic.go:334] "Generic (PLEG): container finished" podID="215b5025-0486-4911-bfbf-25b367a897df" containerID="8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d" exitCode=0 Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.376672 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerDied","Data":"8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d"} Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.377045 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.397630 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.411436 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.411689 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 
2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.450365 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.492217 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.530561 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc 
kubenswrapper[4932]: I0321 09:00:35.576788 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.611709 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.655935 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.691322 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.701904 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.701904 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.701911 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:35 crc kubenswrapper[4932]: E0321 09:00:35.702182 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:35 crc kubenswrapper[4932]: E0321 09:00:35.702271 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:35 crc kubenswrapper[4932]: E0321 09:00:35.702024 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.731223 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.770614 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.810561 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.855969 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.892787 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.931893 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:35 crc kubenswrapper[4932]: I0321 09:00:35.969107 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:35Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.016531 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.052260 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.092536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.140832 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
3-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.170870 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.209184 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.254828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.332818 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.354183 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.370950 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.386209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" event={"ID":"215b5025-0486-4911-bfbf-25b367a897df","Type":"ContainerStarted","Data":"dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c"} Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.388473 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/0.log" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.391287 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2" exitCode=1 Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.391330 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2"} Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.392041 4932 scope.go:117] "RemoveContainer" containerID="772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.416367 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.450899 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6f
b1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.490050 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc 
kubenswrapper[4932]: I0321 09:00:36.530727 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.573488 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.610649 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.651900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.692123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.701738 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:36 crc kubenswrapper[4932]: E0321 09:00:36.701890 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.731018 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.777609 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.812195 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.866493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.893370 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.934773 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:36 crc kubenswrapper[4932]: I0321 09:00:36.978985 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:36Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.017108 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0321 09:00:35.775396 6889 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775449 6889 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775504 6889 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0321 09:00:35.775555 6889 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775563 6889 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0321 09:00:35.775584 6889 factory.go:656] Stopping watch factory\\\\nI0321 09:00:35.775614 6889 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:35.775628 6889 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:35.775741 6889 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775981 6889 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.776338 6889 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94
e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.057039 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.092521 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.129296 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.170369 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6f
b1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.213524 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc 
kubenswrapper[4932]: I0321 09:00:37.254275 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.290497 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.336311 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.373539 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.396687 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/0.log" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.399739 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013"} Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.400572 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.409870 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.449620 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ipta
bles-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.492786 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.530828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.577953 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0321 09:00:35.775396 6889 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775449 6889 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775504 6889 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0321 09:00:35.775555 6889 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775563 6889 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0321 09:00:35.775584 6889 factory.go:656] Stopping watch factory\\\\nI0321 09:00:35.775614 6889 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:35.775628 6889 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:35.775741 6889 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775981 6889 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.776338 6889 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.614743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.652149 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.668502 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.668715 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.668957 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:01:41.668922895 +0000 UTC m=+205.264121214 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.669035 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.669057 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.669129 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:01:41.66911363 +0000 UTC m=+205.264311899 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.669181 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.669277 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:01:41.669252615 +0000 UTC m=+205.264450894 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.690850 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.702613 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.702664 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.702796 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.702879 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.703404 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.703566 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.733163 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.750700 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.769564 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.769625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769728 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769748 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769761 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769802 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 09:01:41.769789329 +0000 UTC m=+205.364987588 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769729 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769851 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769866 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:37 crc kubenswrapper[4932]: E0321 09:00:37.769902 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 09:01:41.769889102 +0000 UTC m=+205.365087381 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.797200 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf001654
14b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.831726 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.870804 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.911024 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.952210 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6f
b1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:37 crc kubenswrapper[4932]: I0321 09:00:37.991500 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc 
kubenswrapper[4932]: I0321 09:00:38.033256 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.074229 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.116476 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.152492 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: E0321 09:00:38.160404 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.192037 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.232264 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.273323 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.310512 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.353121 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.399300 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.404896 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/1.log" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.405665 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/0.log" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.408636 4932 
generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013" exitCode=1 Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.408685 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013"} Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.408762 4932 scope.go:117] "RemoveContainer" containerID="772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.409490 4932 scope.go:117] "RemoveContainer" containerID="5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013" Mar 21 09:00:38 crc kubenswrapper[4932]: E0321 09:00:38.409673 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.431786 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.476063 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0321 09:00:35.775396 6889 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775449 6889 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775504 6889 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0321 09:00:35.775555 6889 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775563 6889 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0321 09:00:35.775584 6889 factory.go:656] Stopping watch factory\\\\nI0321 09:00:35.775614 6889 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:35.775628 6889 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:35.775741 6889 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775981 6889 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.776338 6889 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.513494 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.551243 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a
2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.587767 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.631337 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.678940 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.702108 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:38 crc kubenswrapper[4932]: E0321 09:00:38.702296 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.716311 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.749597 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.790441 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.830038 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.869863 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.910811 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc 
kubenswrapper[4932]: I0321 09:00:38.954914 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:38 crc kubenswrapper[4932]: I0321 09:00:38.993928 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:38Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.033908 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.074842 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.113176 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.155975 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.193310 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.230192 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.271716 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.310993 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.361958 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://772ab4730c053bfcf47fb759cf6b86bde600be18c54ac899cf5443c94eb0dee2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0321 09:00:35.775396 6889 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775449 6889 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775504 6889 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0321 09:00:35.775555 6889 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.775563 6889 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0321 09:00:35.775584 6889 factory.go:656] Stopping watch factory\\\\nI0321 09:00:35.775614 6889 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:35.775628 6889 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:35.775741 6889 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0321 09:00:35.775981 6889 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0321 09:00:35.776338 6889 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:37Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-m4n7b\\\\nI0321 09:00:37.405272 7074 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0321 09:00:37.405465 7074 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch 
crc\\\\nF0321 09:00:37.405224 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z]\\\\nI032\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\
"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.393847 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.414936 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/1.log" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.418869 4932 scope.go:117] "RemoveContainer" containerID="5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013" Mar 21 09:00:39 crc kubenswrapper[4932]: E0321 09:00:39.419037 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.436289 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.478568 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.510628 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.553449 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e
860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.594303 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.629808 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.673084 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.701810 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.701909 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.701815 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:39 crc kubenswrapper[4932]: E0321 09:00:39.702017 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:39 crc kubenswrapper[4932]: E0321 09:00:39.702232 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:39 crc kubenswrapper[4932]: E0321 09:00:39.702286 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.710664 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.756862 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94b
af347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.809459 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc 
kubenswrapper[4932]: I0321 09:00:39.835058 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.870173 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24
2b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\
\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.910007 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc 
kubenswrapper[4932]: I0321 09:00:39.955972 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:39 crc kubenswrapper[4932]: I0321 09:00:39.990455 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:39Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.037005 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.078095 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.112763 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.150957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.189608 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.243637 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:37Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-m4n7b\\\\nI0321 09:00:37.405272 7074 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0321 09:00:37.405465 7074 base_network_controller_pods.go:477] 
[default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0321 09:00:37.405224 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z]\\\\nI032\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.274238 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.310536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a
2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.350924 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.393372 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.433729 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.477671 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.516304 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.554202 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:40Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:40 crc kubenswrapper[4932]: I0321 09:00:40.702315 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:40 crc kubenswrapper[4932]: E0321 09:00:40.702551 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.039827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.039894 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.039906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.039927 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.039942 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:41Z","lastTransitionTime":"2026-03-21T09:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.057608 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:41Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.062566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.062651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.062681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.062718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.062752 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:41Z","lastTransitionTime":"2026-03-21T09:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.082921 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:41Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.091130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.091203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.091220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.091259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.091275 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:41Z","lastTransitionTime":"2026-03-21T09:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.107714 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:41Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.112650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.112696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.112707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.112732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.112748 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:41Z","lastTransitionTime":"2026-03-21T09:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.135271 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:41Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.140463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.140535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.140557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.140590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.140612 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:41Z","lastTransitionTime":"2026-03-21T09:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.160087 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:41Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.160326 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.702513 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.702577 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:41 crc kubenswrapper[4932]: I0321 09:00:41.702656 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.702806 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.702932 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:41 crc kubenswrapper[4932]: E0321 09:00:41.703006 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:42 crc kubenswrapper[4932]: I0321 09:00:42.701962 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:42 crc kubenswrapper[4932]: E0321 09:00:42.702187 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:43 crc kubenswrapper[4932]: E0321 09:00:43.162411 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:00:43 crc kubenswrapper[4932]: I0321 09:00:43.701598 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:43 crc kubenswrapper[4932]: I0321 09:00:43.701679 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:43 crc kubenswrapper[4932]: I0321 09:00:43.701702 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:43 crc kubenswrapper[4932]: E0321 09:00:43.701830 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:43 crc kubenswrapper[4932]: E0321 09:00:43.702036 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:43 crc kubenswrapper[4932]: E0321 09:00:43.702232 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:44 crc kubenswrapper[4932]: I0321 09:00:44.702038 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:44 crc kubenswrapper[4932]: E0321 09:00:44.702308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:45 crc kubenswrapper[4932]: I0321 09:00:45.350121 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:45 crc kubenswrapper[4932]: E0321 09:00:45.350399 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:45 crc kubenswrapper[4932]: E0321 09:00:45.350495 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:01:17.35046465 +0000 UTC m=+180.945662959 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:00:45 crc kubenswrapper[4932]: I0321 09:00:45.701711 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:45 crc kubenswrapper[4932]: I0321 09:00:45.701798 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:45 crc kubenswrapper[4932]: E0321 09:00:45.701916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:45 crc kubenswrapper[4932]: I0321 09:00:45.701981 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:45 crc kubenswrapper[4932]: E0321 09:00:45.702118 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:45 crc kubenswrapper[4932]: E0321 09:00:45.702219 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:46 crc kubenswrapper[4932]: I0321 09:00:46.702275 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:46 crc kubenswrapper[4932]: E0321 09:00:46.702781 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.701562 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.701591 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:47 crc kubenswrapper[4932]: E0321 09:00:47.701759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.701834 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:47 crc kubenswrapper[4932]: E0321 09:00:47.701986 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:47 crc kubenswrapper[4932]: E0321 09:00:47.702055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.720326 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.737523 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.756615 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.772972 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.787073 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.804054 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc 
kubenswrapper[4932]: I0321 09:00:47.828586 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.844911 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.859283 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.876538 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.888516 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.901289 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.914937 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.936998 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:37Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-m4n7b\\\\nI0321 09:00:37.405272 7074 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0321 09:00:37.405465 7074 base_network_controller_pods.go:477] 
[default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0321 09:00:37.405224 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z]\\\\nI032\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.954468 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.972678 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a
2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:47 crc kubenswrapper[4932]: I0321 09:00:47.991702 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:47Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:48 crc kubenswrapper[4932]: I0321 09:00:48.005072 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:48Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:48 crc kubenswrapper[4932]: I0321 09:00:48.033651 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
3-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:48Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:48 crc kubenswrapper[4932]: E0321 09:00:48.163135 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:00:48 crc kubenswrapper[4932]: I0321 09:00:48.701737 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:48 crc kubenswrapper[4932]: E0321 09:00:48.701896 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:49 crc kubenswrapper[4932]: I0321 09:00:49.702276 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:49 crc kubenswrapper[4932]: I0321 09:00:49.702585 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:49 crc kubenswrapper[4932]: E0321 09:00:49.702748 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:49 crc kubenswrapper[4932]: I0321 09:00:49.703034 4932 scope.go:117] "RemoveContainer" containerID="5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013" Mar 21 09:00:49 crc kubenswrapper[4932]: I0321 09:00:49.703254 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:49 crc kubenswrapper[4932]: E0321 09:00:49.703315 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:49 crc kubenswrapper[4932]: E0321 09:00:49.703573 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.468560 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/1.log" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.471790 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4"} Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.472283 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.487433 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.502506 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.516504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.533470 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.552260 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:37Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-m4n7b\\\\nI0321 09:00:37.405272 7074 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0321 09:00:37.405465 7074 base_network_controller_pods.go:477] 
[default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0321 09:00:37.405224 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 
2025-08-24T17:21:41Z]\\\\nI032\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn
kube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.568558 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.582368 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a
2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.594081 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.609676 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.634865 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.652434 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.664680 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.679575 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.690678 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.702169 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.702156 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: E0321 09:00:50.702335 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.713074 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.726630 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.741319 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:50 crc kubenswrapper[4932]: I0321 09:00:50.754407 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:50Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.475723 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/2.log" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.476455 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/1.log" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.478229 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4" exitCode=1 Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.478270 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4"} Mar 21 09:00:51 crc 
kubenswrapper[4932]: I0321 09:00:51.478308 4932 scope.go:117] "RemoveContainer" containerID="5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.478966 4932 scope.go:117] "RemoveContainer" containerID="3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.479112 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.503071 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-
syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.520166 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.520319 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.520375 4932 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.520394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.520413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.520426 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:51Z","lastTransitionTime":"2026-03-21T09:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.533039 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.537130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.537198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.537216 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.537249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.537268 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:51Z","lastTransitionTime":"2026-03-21T09:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.540887 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.550663 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.554286 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.555429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.555462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.555474 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.555492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.555504 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:51Z","lastTransitionTime":"2026-03-21T09:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.565205 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.567989 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.573683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.573724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.573737 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.573757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.573794 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:51Z","lastTransitionTime":"2026-03-21T09:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.579441 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.588122 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.591711 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc 
kubenswrapper[4932]: I0321 09:00:51.593216 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.593243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.593254 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.593272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.593286 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:00:51Z","lastTransitionTime":"2026-03-21T09:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.603444 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.605158 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:51Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.605263 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.613753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.622898 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.632637 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.641153 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.652391 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.664093 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.681092 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b18c2b67b245a7b6da150e022f707c6bcb10d3d4e204a40580184daf620d013\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:37Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-m4n7b\\\\nI0321 09:00:37.405272 7074 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI0321 09:00:37.405465 7074 base_network_controller_pods.go:477] 
[default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nF0321 09:00:37.405224 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:37Z is after 2025-08-24T17:21:41Z]\\\\nI032\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 
09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\
\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.694614 4932 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:3
0Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.703375 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.703495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.703604 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.703747 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.703954 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:51 crc kubenswrapper[4932]: E0321 09:00:51.704068 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.713732 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.724900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:51 crc kubenswrapper[4932]: I0321 09:00:51.733940 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:51Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.486794 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/2.log" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.493056 4932 scope.go:117] "RemoveContainer" containerID="3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4" Mar 21 09:00:52 crc kubenswrapper[4932]: E0321 09:00:52.493614 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.520914 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.548157 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.561145 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.576395 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e
860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.592967 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.606739 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.621298 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.635623 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.649674 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.664398 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc 
kubenswrapper[4932]: I0321 09:00:52.701458 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:52 crc kubenswrapper[4932]: E0321 09:00:52.701640 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.713748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.730711 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.741993 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.752734 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.764811 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.775845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.786755 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.813998 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:52 crc kubenswrapper[4932]: I0321 09:00:52.833852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:52Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:53 crc kubenswrapper[4932]: E0321 09:00:53.164335 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:00:53 crc kubenswrapper[4932]: I0321 09:00:53.701914 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:53 crc kubenswrapper[4932]: I0321 09:00:53.701929 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:53 crc kubenswrapper[4932]: E0321 09:00:53.702169 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:53 crc kubenswrapper[4932]: E0321 09:00:53.702406 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:53 crc kubenswrapper[4932]: I0321 09:00:53.701953 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:53 crc kubenswrapper[4932]: E0321 09:00:53.702759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:54 crc kubenswrapper[4932]: I0321 09:00:54.702329 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:54 crc kubenswrapper[4932]: E0321 09:00:54.702600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:55 crc kubenswrapper[4932]: I0321 09:00:55.701539 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:55 crc kubenswrapper[4932]: I0321 09:00:55.701564 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:55 crc kubenswrapper[4932]: E0321 09:00:55.702038 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:55 crc kubenswrapper[4932]: E0321 09:00:55.702214 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:55 crc kubenswrapper[4932]: I0321 09:00:55.701638 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:55 crc kubenswrapper[4932]: E0321 09:00:55.702383 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:56 crc kubenswrapper[4932]: I0321 09:00:56.701749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:56 crc kubenswrapper[4932]: E0321 09:00:56.701917 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.701570 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.701588 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:57 crc kubenswrapper[4932]: E0321 09:00:57.701807 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.701892 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:57 crc kubenswrapper[4932]: E0321 09:00:57.701909 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:57 crc kubenswrapper[4932]: E0321 09:00:57.702117 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.739752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.761812 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.775555 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.790173 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc 
kubenswrapper[4932]: I0321 09:00:57.807703 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.823251 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.838731 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.852403 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.863582 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.875929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.889374 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.903934 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.918488 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.935124 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.950880 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.970944 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:57 crc kubenswrapper[4932]: I0321 09:00:57.995128 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:57Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:58 crc kubenswrapper[4932]: I0321 09:00:58.029021 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:58Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:58 crc kubenswrapper[4932]: I0321 09:00:58.049379 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:00:58Z is after 2025-08-24T17:21:41Z" Mar 21 09:00:58 crc kubenswrapper[4932]: E0321 09:00:58.165929 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:00:58 crc kubenswrapper[4932]: I0321 09:00:58.701647 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:00:58 crc kubenswrapper[4932]: E0321 09:00:58.701901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:00:59 crc kubenswrapper[4932]: I0321 09:00:59.702131 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:00:59 crc kubenswrapper[4932]: E0321 09:00:59.702335 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:00:59 crc kubenswrapper[4932]: I0321 09:00:59.702710 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:00:59 crc kubenswrapper[4932]: E0321 09:00:59.702865 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:00:59 crc kubenswrapper[4932]: I0321 09:00:59.702710 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:00:59 crc kubenswrapper[4932]: E0321 09:00:59.703064 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:00 crc kubenswrapper[4932]: I0321 09:01:00.702408 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:00 crc kubenswrapper[4932]: E0321 09:01:00.702619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.702082 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.702231 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.702281 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.702407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.703091 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.703370 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.731279 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.731320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.731333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.731377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.731390 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:01Z","lastTransitionTime":"2026-03-21T09:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.747545 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:01Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.752339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.752386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.752397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.752411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.752422 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:01Z","lastTransitionTime":"2026-03-21T09:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.767804 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:01Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.772613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.772645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.772665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.772682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.772695 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:01Z","lastTransitionTime":"2026-03-21T09:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.789468 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:01Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.795453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.795498 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.795519 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.795544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.795563 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:01Z","lastTransitionTime":"2026-03-21T09:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.815079 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:01Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.820694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.820750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.820769 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.820796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:01 crc kubenswrapper[4932]: I0321 09:01:01.820814 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:01Z","lastTransitionTime":"2026-03-21T09:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.836818 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:01Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:01 crc kubenswrapper[4932]: E0321 09:01:01.837122 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:01:02 crc kubenswrapper[4932]: I0321 09:01:02.702306 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:02 crc kubenswrapper[4932]: E0321 09:01:02.702482 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:03 crc kubenswrapper[4932]: E0321 09:01:03.167075 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:03 crc kubenswrapper[4932]: I0321 09:01:03.701671 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:03 crc kubenswrapper[4932]: I0321 09:01:03.701735 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:03 crc kubenswrapper[4932]: I0321 09:01:03.701708 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:03 crc kubenswrapper[4932]: E0321 09:01:03.701862 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:03 crc kubenswrapper[4932]: E0321 09:01:03.701969 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:03 crc kubenswrapper[4932]: E0321 09:01:03.702075 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:04 crc kubenswrapper[4932]: I0321 09:01:04.701474 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:04 crc kubenswrapper[4932]: E0321 09:01:04.701663 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:05 crc kubenswrapper[4932]: I0321 09:01:05.702517 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:05 crc kubenswrapper[4932]: E0321 09:01:05.703744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:05 crc kubenswrapper[4932]: I0321 09:01:05.702876 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:05 crc kubenswrapper[4932]: E0321 09:01:05.703893 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:05 crc kubenswrapper[4932]: I0321 09:01:05.702639 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:05 crc kubenswrapper[4932]: E0321 09:01:05.703982 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:06 crc kubenswrapper[4932]: I0321 09:01:06.701305 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:06 crc kubenswrapper[4932]: E0321 09:01:06.701536 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.701630 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.701667 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.702679 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:07 crc kubenswrapper[4932]: E0321 09:01:07.705204 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:07 crc kubenswrapper[4932]: E0321 09:01:07.705788 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:07 crc kubenswrapper[4932]: E0321 09:01:07.706987 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.707616 4932 scope.go:117] "RemoveContainer" containerID="3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4" Mar 21 09:01:07 crc kubenswrapper[4932]: E0321 09:01:07.708153 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.732045 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce7
09b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.754022 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.770492 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.794707 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.809497 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.847583 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.882734 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.903500 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.919268 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.937100 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6f
b1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.953393 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc 
kubenswrapper[4932]: I0321 09:01:07.973668 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:07 crc kubenswrapper[4932]: I0321 09:01:07.987417 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:07Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.006937 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:08Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.024251 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:08Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.039406 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:08Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.057214 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:08Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.075255 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:08Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.093242 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:08Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:08 crc kubenswrapper[4932]: E0321 09:01:08.168175 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:08 crc kubenswrapper[4932]: I0321 09:01:08.702504 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:08 crc kubenswrapper[4932]: E0321 09:01:08.702756 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:09 crc kubenswrapper[4932]: I0321 09:01:09.701340 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:09 crc kubenswrapper[4932]: I0321 09:01:09.701441 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:09 crc kubenswrapper[4932]: I0321 09:01:09.701398 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:09 crc kubenswrapper[4932]: E0321 09:01:09.701626 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:09 crc kubenswrapper[4932]: E0321 09:01:09.701878 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:09 crc kubenswrapper[4932]: E0321 09:01:09.702133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:10 crc kubenswrapper[4932]: I0321 09:01:10.701475 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:10 crc kubenswrapper[4932]: E0321 09:01:10.702255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:11 crc kubenswrapper[4932]: I0321 09:01:11.702089 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:11 crc kubenswrapper[4932]: E0321 09:01:11.702243 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:11 crc kubenswrapper[4932]: I0321 09:01:11.702287 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:11 crc kubenswrapper[4932]: E0321 09:01:11.702644 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:11 crc kubenswrapper[4932]: I0321 09:01:11.702888 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:11 crc kubenswrapper[4932]: E0321 09:01:11.703113 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.182911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.182962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.182976 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.182995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.183006 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:12Z","lastTransitionTime":"2026-03-21T09:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.197749 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:12Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.204038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.204096 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.204111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.204133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.204150 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:12Z","lastTransitionTime":"2026-03-21T09:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.219738 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:12Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.224619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.224698 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.224721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.224756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.224779 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:12Z","lastTransitionTime":"2026-03-21T09:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.241535 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:12Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.246890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.246969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.246998 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.247032 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.247097 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:12Z","lastTransitionTime":"2026-03-21T09:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.268648 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:12Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.273866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.273944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.273967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.274002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.274033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:12Z","lastTransitionTime":"2026-03-21T09:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.295856 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:12Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.296019 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:01:12 crc kubenswrapper[4932]: I0321 09:01:12.702180 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:12 crc kubenswrapper[4932]: E0321 09:01:12.702389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:13 crc kubenswrapper[4932]: E0321 09:01:13.170250 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:13 crc kubenswrapper[4932]: I0321 09:01:13.702051 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:13 crc kubenswrapper[4932]: I0321 09:01:13.702123 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:13 crc kubenswrapper[4932]: E0321 09:01:13.702255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:13 crc kubenswrapper[4932]: I0321 09:01:13.702401 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:13 crc kubenswrapper[4932]: E0321 09:01:13.702618 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:13 crc kubenswrapper[4932]: E0321 09:01:13.702707 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:14 crc kubenswrapper[4932]: I0321 09:01:14.701611 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:14 crc kubenswrapper[4932]: E0321 09:01:14.701836 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:15 crc kubenswrapper[4932]: I0321 09:01:15.702362 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:15 crc kubenswrapper[4932]: E0321 09:01:15.702567 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:15 crc kubenswrapper[4932]: I0321 09:01:15.702377 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:15 crc kubenswrapper[4932]: E0321 09:01:15.702831 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:15 crc kubenswrapper[4932]: I0321 09:01:15.702895 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:15 crc kubenswrapper[4932]: E0321 09:01:15.702976 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:16 crc kubenswrapper[4932]: I0321 09:01:16.702008 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:16 crc kubenswrapper[4932]: E0321 09:01:16.702202 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.382536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:17 crc kubenswrapper[4932]: E0321 09:01:17.382755 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:01:17 crc kubenswrapper[4932]: E0321 09:01:17.382901 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:02:21.38287482 +0000 UTC m=+244.978073089 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.701759 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.701878 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:17 crc kubenswrapper[4932]: E0321 09:01:17.701921 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:17 crc kubenswrapper[4932]: E0321 09:01:17.702222 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.702381 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:17 crc kubenswrapper[4932]: E0321 09:01:17.702560 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.742740 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.758919 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.775840 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.793187 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc 
kubenswrapper[4932]: I0321 09:01:17.809523 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.828856 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.845477 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.862727 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.875485 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.889495 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.904302 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.920016 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.935120 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.947178 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.961084 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:17 crc kubenswrapper[4932]: I0321 09:01:17.982812 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:18 crc kubenswrapper[4932]: I0321 09:01:18.002925 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:17Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:18 crc kubenswrapper[4932]: I0321 09:01:18.027702 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:18Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:18 crc kubenswrapper[4932]: I0321 09:01:18.053386 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:18Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:18 crc kubenswrapper[4932]: E0321 09:01:18.171096 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:18 crc kubenswrapper[4932]: I0321 09:01:18.701981 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:18 crc kubenswrapper[4932]: E0321 09:01:18.702213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.594134 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/0.log" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.594191 4932 generic.go:334] "Generic (PLEG): container finished" podID="a038ce15-d375-452d-b38f-6893df65dee4" containerID="41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9" exitCode=1 Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.594260 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerDied","Data":"41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9"} Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.595122 4932 scope.go:117] "RemoveContainer" containerID="41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.616884 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.635878 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.649500 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.664127 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.684727 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.701613 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.701629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:19 crc kubenswrapper[4932]: E0321 09:01:19.701750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.701853 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:19 crc kubenswrapper[4932]: E0321 09:01:19.701918 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:19 crc kubenswrapper[4932]: E0321 09:01:19.704019 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.706011 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.724570 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.741336 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.760467 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.795420 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.813158 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:18Z\\\",\\\"message\\\":\\\"2026-03-21T09:00:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c\\\\n2026-03-21T09:00:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c to /host/opt/cni/bin/\\\\n2026-03-21T09:00:33Z [verbose] multus-daemon started\\\\n2026-03-21T09:00:33Z [verbose] Readiness Indicator file check\\\\n2026-03-21T09:01:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.829181 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.844281 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.859680 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.872510 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.884959 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc 
kubenswrapper[4932]: I0321 09:01:19.899704 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.911971 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:19 crc kubenswrapper[4932]: I0321 09:01:19.922370 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:19Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.601182 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/0.log" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.601628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerStarted","Data":"8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a"} Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.621505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc 
kubenswrapper[4932]: I0321 09:01:20.646035 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.666767 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.684054 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.701993 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:20 crc kubenswrapper[4932]: E0321 09:01:20.702148 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.703319 4932 scope.go:117] "RemoveContainer" containerID="3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.710948 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.726579 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.747105 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.766585 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.784047 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.800379 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.816413 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.830316 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.848018 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.864377 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.890909 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.909531 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.936283 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce17
20de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.956797 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:18Z\\\",\\\"message\\\":\\\"2026-03-21T09:00:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c\\\\n2026-03-21T09:00:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c to /host/opt/cni/bin/\\\\n2026-03-21T09:00:33Z [verbose] multus-daemon started\\\\n2026-03-21T09:00:33Z [verbose] 
Readiness Indicator file check\\\\n2026-03-21T09:01:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:20 crc kubenswrapper[4932]: I0321 09:01:20.971550 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72f
c63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:20Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.609153 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/3.log" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.609948 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/2.log" Mar 21 
09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.613436 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" exitCode=1 Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.613489 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.613590 4932 scope.go:117] "RemoveContainer" containerID="3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.614471 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:01:21 crc kubenswrapper[4932]: E0321 09:01:21.614702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.631133 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.645123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.659425 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.671006 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.685370 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.699751 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.701982 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:21 crc kubenswrapper[4932]: E0321 09:01:21.702099 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.701982 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.702172 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:21 crc kubenswrapper[4932]: E0321 09:01:21.702232 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:21 crc kubenswrapper[4932]: E0321 09:01:21.702834 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.724933 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:21Z\\\",\\\"message\\\":\\\"on-m4n7b after 0 failed attempt(s)\\\\nI0321 09:01:21.547473 7591 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0321 09:01:21.547471 7591 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-5wwpb\\\\nI0321 09:01:21.547481 7591 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0321 09:01:21.547482 7591 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z]\\\\nI0321 09:01:21.547495 7591 obj_retry.go:365] Adding new object: *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-c
ni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.739711 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.749716 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.760709 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:18Z\\\",\\\"message\\\":\\\"2026-03-21T09:00:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c\\\\n2026-03-21T09:00:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c to /host/opt/cni/bin/\\\\n2026-03-21T09:00:33Z [verbose] multus-daemon started\\\\n2026-03-21T09:00:33Z [verbose] 
Readiness Indicator file check\\\\n2026-03-21T09:01:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.772049 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72f
c63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.789883 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.804130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.818279 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.831471 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.843969 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.856860 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc kubenswrapper[4932]: I0321 09:01:21.868727 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:21 crc 
kubenswrapper[4932]: I0321 09:01:21.887079 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.448448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.448500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.448511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.448530 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.448542 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:22Z","lastTransitionTime":"2026-03-21T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.472384 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:22Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.478168 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.478341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.478445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.478530 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.478561 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:22Z","lastTransitionTime":"2026-03-21T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.501824 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:22Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.507798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.507856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.507870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.507891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.507904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:22Z","lastTransitionTime":"2026-03-21T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.524789 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:22Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.531280 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.531338 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.531377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.531400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.531415 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:22Z","lastTransitionTime":"2026-03-21T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.553653 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:22Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.559430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.559484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.559497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.559515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.559529 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:22Z","lastTransitionTime":"2026-03-21T09:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.578846 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:22Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.578997 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.619581 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/3.log" Mar 21 09:01:22 crc kubenswrapper[4932]: I0321 09:01:22.701586 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:22 crc kubenswrapper[4932]: E0321 09:01:22.702292 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:23 crc kubenswrapper[4932]: E0321 09:01:23.172840 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:23 crc kubenswrapper[4932]: I0321 09:01:23.702219 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:23 crc kubenswrapper[4932]: I0321 09:01:23.702219 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:23 crc kubenswrapper[4932]: I0321 09:01:23.702615 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:23 crc kubenswrapper[4932]: E0321 09:01:23.702540 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:23 crc kubenswrapper[4932]: E0321 09:01:23.702774 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:23 crc kubenswrapper[4932]: E0321 09:01:23.702970 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:24 crc kubenswrapper[4932]: I0321 09:01:24.702222 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:24 crc kubenswrapper[4932]: E0321 09:01:24.702971 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:25 crc kubenswrapper[4932]: I0321 09:01:25.702465 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:25 crc kubenswrapper[4932]: I0321 09:01:25.702577 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:25 crc kubenswrapper[4932]: I0321 09:01:25.702718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:25 crc kubenswrapper[4932]: E0321 09:01:25.702891 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:25 crc kubenswrapper[4932]: E0321 09:01:25.703238 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:25 crc kubenswrapper[4932]: E0321 09:01:25.703479 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:26 crc kubenswrapper[4932]: I0321 09:01:26.701398 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:26 crc kubenswrapper[4932]: E0321 09:01:26.701687 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.702142 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.702202 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.702217 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:27 crc kubenswrapper[4932]: E0321 09:01:27.702319 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:27 crc kubenswrapper[4932]: E0321 09:01:27.702577 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:27 crc kubenswrapper[4932]: E0321 09:01:27.702689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.718563 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.734307 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.754163 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.771331 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.805555 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a87c88c333e3443ed9c42c1630a633e69bde09311dec7950f1f8874d96143f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:00:50Z\\\",\\\"message\\\":\\\"7251 handler.go:208] Removed *v1.Node event handler 2\\\\nI0321 09:00:50.672727 7251 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0321 09:00:50.672736 7251 handler.go:208] Removed *v1.Node event handler 7\\\\nI0321 09:00:50.672799 7251 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0321 09:00:50.672836 7251 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0321 09:00:50.672880 7251 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0321 09:00:50.672889 7251 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0321 09:00:50.672948 7251 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0321 09:00:50.672974 7251 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0321 09:00:50.672979 7251 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0321 09:00:50.672995 7251 factory.go:656] Stopping watch factory\\\\nI0321 09:00:50.673010 7251 ovnkube.go:599] Stopped ovnkube\\\\nI0321 09:00:50.673033 7251 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0321 09:00:50.673048 7251 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0321 09:00:50.673050 7251 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0321 09:00:50.673068 7251 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0321 09:00:50.673169 7251 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:21Z\\\",\\\"message\\\":\\\"on-m4n7b after 0 failed attempt(s)\\\\nI0321 09:01:21.547473 7591 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0321 09:01:21.547471 7591 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-5wwpb\\\\nI0321 09:01:21.547481 7591 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0321 09:01:21.547482 7591 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z]\\\\nI0321 09:01:21.547495 7591 obj_retry.go:365] Adding new object: *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:01:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-c
ni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.826580 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.849822 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.866083 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:18Z\\\",\\\"message\\\":\\\"2026-03-21T09:00:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c\\\\n2026-03-21T09:00:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c to /host/opt/cni/bin/\\\\n2026-03-21T09:00:33Z [verbose] multus-daemon started\\\\n2026-03-21T09:00:33Z [verbose] 
Readiness Indicator file check\\\\n2026-03-21T09:01:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.877309 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72f
c63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.894891 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e
860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.910439 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.924412 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.941404 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.954112 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.967725 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc kubenswrapper[4932]: I0321 09:01:27.982155 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:27 crc 
kubenswrapper[4932]: I0321 09:01:27.997230 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:27Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:28 crc kubenswrapper[4932]: I0321 09:01:28.014140 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:28 crc kubenswrapper[4932]: I0321 09:01:28.030556 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:28Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:28 crc kubenswrapper[4932]: E0321 09:01:28.174139 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:28 crc kubenswrapper[4932]: I0321 09:01:28.701637 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:28 crc kubenswrapper[4932]: E0321 09:01:28.701927 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:29 crc kubenswrapper[4932]: I0321 09:01:29.702092 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:29 crc kubenswrapper[4932]: I0321 09:01:29.702117 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:29 crc kubenswrapper[4932]: I0321 09:01:29.702163 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:29 crc kubenswrapper[4932]: E0321 09:01:29.702773 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:29 crc kubenswrapper[4932]: E0321 09:01:29.702831 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:29 crc kubenswrapper[4932]: E0321 09:01:29.702625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.198849 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.200459 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:01:30 crc kubenswrapper[4932]: E0321 09:01:30.200955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.221996 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b873b9b8eaee159effedb863cedf5d54ee26023335360d4d10650bcf80ffc590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.259304 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.279475 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8044dc63-0327-41d4-93fe-af2287271a84\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d679fb7cfa41014da3f11e47f10b55588af70595c1c7e89a677bc9c5fd71a3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1509f41cc97ef933d59df486632ea55fad7536a
b240d59bd502b46635f0dbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chfd4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4n7b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.315683 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96df7c54-2644-44b4-bcd7-13b82db2ea5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:21Z\\\",\\\"message\\\":\\\"on-m4n7b after 0 failed attempt(s)\\\\nI0321 09:01:21.547473 7591 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0321 09:01:21.547471 7591 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-5wwpb\\\\nI0321 09:01:21.547481 7591 obj_retry.go:386] Retry successful 
for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nF0321 09:01:21.547482 7591 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:21Z is after 2025-08-24T17:21:41Z]\\\\nI0321 09:01:21.547495 7591 obj_retry.go:365] Adding new object: *v1.Pod ope\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:01:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29d108f626039f85e7
fd0b10d89684d72c24a94e95af62843d7efa331705738c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dtpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2zqsw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.341123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"215b5025-0486-4911-bfbf-25b367a897df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfecf2d6a9d04087ab71e6612f96c5332a0f9a225f24c8efce709b118faf70c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a041f746ce26036bb5f7704b023ca9ee2acbddcece17e0a020beb0167f6eb79c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acb1321d0ce497793747b64bbdd270163c5cff196cb8acdfb6cbcc8c75f03cc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe17e75ec41fa6c8dfa8e554d7d150cba4cd444425f939428677b3d747d25e02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://918a4
cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://918a4cb9423e43f564e8167b395ca8873b2ac1807f21a9f864b216e65e197ccd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b940fc4fbc46945c0098912542734da4bcd9356ac14ba3091d58361e5a0d9975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8248b92b468e700ec4f71257a6bb2a4213641a03446db7333618fe979232c58d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T09:00:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb7cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r8kxd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.356969 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75658228-f12b-4336-91b1-a2b5a1afab46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caf5c79b52c74edf5c9e59be6fae41a25a4c6b83af7cb81bc35e382ff0ae2446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5651cdc8f844d5f1648ba05e1c3ed2f4eaadda85f34a3c6498395230a7943fa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f08c0dd0b0fbabeef4c69375f0efe43d4d59d26a9e9ff82493ad07f9efc8b1c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a
2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e6d4e2feda599c798d4780a2df5bca9bbb57fdf5fc04688ac502d74fe365a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.370156 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19ffd9ac-c758-4f3a-b59d-5ad8c8e488d3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adbc2677ff6470372684bdac886f31a06be3069aac886512e133d9ede4f3d7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8adbc83ecc37d66010214b5c63c93f340315cc6cf772c0e01406378f8846ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.391262 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0393303b82b823f0492095152ef437b13ad27dfbcf975e5f724ec9b8e8a90f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.407916 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.432246 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"615f72d3-bf7e-430a-a9b9-94d24f926791\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d5c154ffdcad9c0c3a7e2e77ab46197ac73a3863a8633e05f85d4b09d5357bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a787740392a7da24f62b7576368196f423faf089eb7890ba787e59ace4039212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4795c323c483ec14dce5933e4e291e91917f02c25a6ada0e2a6785eb0833fc16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49903cdf7aa9ce1720de681328b37e8733117962e5f29a86477451a23b6a0a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1807aa33ba19edb3e514939c50c46b3b3994b08d9d1bbd3c66f7a5c66f54c7e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4abde63a77731517252ea490e3fc63467f865ccf00165414b14f866527aa22\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://427d19bb3f326661fe4911b042fc5a327629cf90f3ae9069527e6a14fea022d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e3467cfdbff4b8333dfd1449538e388dd2083879045d493a87fd29095b5d88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.449617 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jmd8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a038ce15-d375-452d-b38f-6893df65dee4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-21T09:01:18Z\\\",\\\"message\\\":\\\"2026-03-21T09:00:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c\\\\n2026-03-21T09:00:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b54054b5-96f4-4084-96b6-753875ac5c3c to /host/opt/cni/bin/\\\\n2026-03-21T09:00:33Z [verbose] multus-daemon started\\\\n2026-03-21T09:00:33Z [verbose] Readiness Indicator file check\\\\n2026-03-21T09:01:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T09:00:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:01:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58z89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jmd8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.465845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-svc74" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0aa0d22-0af0-4d4e-8a8a-4da45fc7dccb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae8ab18f04c69f72fc63335439e22bbbae9f034fc3116383969f9500dca70fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8bgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-svc74\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.480230 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5wwpb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f312294e-78f4-44ca-8dee-96797a8b9205\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3036b9ad98c0889fa9532b7a8c9b5945ac9dcca9375b3c9d390ca1b214ebbdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w6tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:59:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5wwpb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.498076 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f066ce-1e24-4e33-8d78-8a5187647c1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94baf347735e92772e8010d088088c4cae7a6c333e0a9919d3b8646002a3508d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2e19a95cdda713b5239bfb66e121b760d6fb1cb55ee65f40df5867f78fe3bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4k5hd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hmlw7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.513784 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb0a5470-935a-4f5a-9a19-f261a853a79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lc64w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T09:00:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cpgnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc 
kubenswrapper[4932]: I0321 09:01:30.536458 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b82a80-f33d-47fe-8bab-21cd111dba94\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fc65a21f41007
8f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:59:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0321 08:59:17.308333 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0321 08:59:17.308501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0321 08:59:17.309284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1935671779/tls.crt::/tmp/serving-cert-1935671779/tls.key\\\\\\\"\\\\nI0321 08:59:17.756119 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0321 08:59:17.758899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0321 08:59:17.758918 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0321 08:59:17.758945 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0321 08:59:17.758954 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0321 08:59:17.766299 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0321 08:59:17.766339 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0321 08:59:17.766380 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766430 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0321 08:59:17.766452 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0321 08:59:17.766471 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0321 08:59:17.766491 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0321 08:59:17.766510 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0321 08:59:17.769273 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:59:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T08:58:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.552028 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd0e9f77-0e0d-49ec-b758-1c209843c4ff\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T08:58:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86b94f1e6da1da66172ca773494ea5322aa556f97d0cb19140dab91690d3777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92b6e70e0840c618b8438c027be5d4757b55e7c852ed7af816defba5c79aafbb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-21T08:58:46Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0321 08:58:22.103507 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0321 08:58:22.114527 1 observer_polling.go:159] Starting file observer\\\\nI0321 08:58:22.226263 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0321 08:58:22.230896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0321 08:58:46.672848 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0321 08:58:46.673007 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T08:58:46Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0e7ae47dfdf7786fba7475e2d762059d9ace8a9f228a82a98a6af39940f627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c45c1478bd1388d79a562e73307dc70b4e8fb3afce19d0c57c7d7804ef70d47a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T08:58:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T08:58:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.566448 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.585755 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T08:59:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T09:00:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca06fd9c262d6409cf6d2b57531931fadd609d4f794b60fd0b62b7155d5be2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://546fa46cbfd4b71735b125be09dd818fb874710ffab33b6037ee0ed93c87d6ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T09:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:30Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:30 crc kubenswrapper[4932]: I0321 09:01:30.702013 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:30 crc kubenswrapper[4932]: E0321 09:01:30.702262 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:31 crc kubenswrapper[4932]: I0321 09:01:31.701968 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:31 crc kubenswrapper[4932]: I0321 09:01:31.702112 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:31 crc kubenswrapper[4932]: I0321 09:01:31.702341 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:31 crc kubenswrapper[4932]: E0321 09:01:31.702558 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:31 crc kubenswrapper[4932]: E0321 09:01:31.702728 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:31 crc kubenswrapper[4932]: E0321 09:01:31.703029 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.610723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.610782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.610793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.610815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.610829 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:32Z","lastTransitionTime":"2026-03-21T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.631863 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.638088 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.638180 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.638201 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.638232 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.638252 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:32Z","lastTransitionTime":"2026-03-21T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.663324 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.670331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.670429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.670448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.670481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.670506 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:32Z","lastTransitionTime":"2026-03-21T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.690921 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.696676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.696773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.696795 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.696851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.696877 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:32Z","lastTransitionTime":"2026-03-21T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.701918 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.702112 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.714944 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.719401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.719457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.719468 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.719492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:32 crc kubenswrapper[4932]: I0321 09:01:32.719506 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:32Z","lastTransitionTime":"2026-03-21T09:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.735298 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T09:01:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9976af6a-168a-4223-a21b-ff86966c37d0\\\",\\\"systemUUID\\\":\\\"33eb7fa8-29e7-42f9-b8ab-2dc48630a90a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T09:01:32Z is after 2025-08-24T17:21:41Z" Mar 21 09:01:32 crc kubenswrapper[4932]: E0321 09:01:32.735624 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 09:01:33 crc kubenswrapper[4932]: E0321 09:01:33.176083 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:33 crc kubenswrapper[4932]: I0321 09:01:33.702093 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:33 crc kubenswrapper[4932]: I0321 09:01:33.702138 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:33 crc kubenswrapper[4932]: I0321 09:01:33.702253 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:33 crc kubenswrapper[4932]: E0321 09:01:33.702406 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:33 crc kubenswrapper[4932]: E0321 09:01:33.702454 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:33 crc kubenswrapper[4932]: E0321 09:01:33.702532 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:34 crc kubenswrapper[4932]: I0321 09:01:34.701701 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:34 crc kubenswrapper[4932]: E0321 09:01:34.701896 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:35 crc kubenswrapper[4932]: I0321 09:01:35.702139 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:35 crc kubenswrapper[4932]: I0321 09:01:35.702456 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:35 crc kubenswrapper[4932]: I0321 09:01:35.702732 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:35 crc kubenswrapper[4932]: E0321 09:01:35.702845 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:35 crc kubenswrapper[4932]: E0321 09:01:35.702944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:35 crc kubenswrapper[4932]: E0321 09:01:35.703048 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:36 crc kubenswrapper[4932]: I0321 09:01:36.701894 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:36 crc kubenswrapper[4932]: E0321 09:01:36.702079 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.701450 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.701567 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:37 crc kubenswrapper[4932]: E0321 09:01:37.701668 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.701724 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:37 crc kubenswrapper[4932]: E0321 09:01:37.701893 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:37 crc kubenswrapper[4932]: E0321 09:01:37.702030 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.792302 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hmlw7" podStartSLOduration=143.79227948 podStartE2EDuration="2m23.79227948s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.772981673 +0000 UTC m=+201.368179962" watchObservedRunningTime="2026-03-21 09:01:37.79227948 +0000 UTC m=+201.387477749" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.828013 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.827993984 podStartE2EDuration="1m16.827993984s" podCreationTimestamp="2026-03-21 09:00:21 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.827911841 +0000 UTC m=+201.423110120" watchObservedRunningTime="2026-03-21 09:01:37.827993984 +0000 UTC m=+201.423192253" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.828219 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=120.828213191 podStartE2EDuration="2m0.828213191s" podCreationTimestamp="2026-03-21 08:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.812232848 +0000 UTC m=+201.407431127" watchObservedRunningTime="2026-03-21 09:01:37.828213191 +0000 UTC m=+201.423411460" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.872084 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5wwpb" podStartSLOduration=144.872056859 podStartE2EDuration="2m24.872056859s" podCreationTimestamp="2026-03-21 08:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.867811776 +0000 UTC m=+201.463010065" watchObservedRunningTime="2026-03-21 09:01:37.872056859 +0000 UTC m=+201.467255128" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.916192 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podStartSLOduration=143.916170408 podStartE2EDuration="2m23.916170408s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.916152687 +0000 UTC m=+201.511350966" watchObservedRunningTime="2026-03-21 09:01:37.916170408 +0000 UTC m=+201.511368677" Mar 21 09:01:37 crc 
kubenswrapper[4932]: I0321 09:01:37.948714 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.948691341 podStartE2EDuration="1m0.948691341s" podCreationTimestamp="2026-03-21 09:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.948292408 +0000 UTC m=+201.543490697" watchObservedRunningTime="2026-03-21 09:01:37.948691341 +0000 UTC m=+201.543889610" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.949019 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-r8kxd" podStartSLOduration=143.949010171 podStartE2EDuration="2m23.949010171s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.932742099 +0000 UTC m=+201.527940368" watchObservedRunningTime="2026-03-21 09:01:37.949010171 +0000 UTC m=+201.544208440" Mar 21 09:01:37 crc kubenswrapper[4932]: I0321 09:01:37.968407 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=118.96837872 podStartE2EDuration="1m58.96837872s" podCreationTimestamp="2026-03-21 08:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:37.967999298 +0000 UTC m=+201.563197567" watchObservedRunningTime="2026-03-21 09:01:37.96837872 +0000 UTC m=+201.563576999" Mar 21 09:01:38 crc kubenswrapper[4932]: I0321 09:01:38.058781 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.058752483 podStartE2EDuration="1m29.058752483s" 
podCreationTimestamp="2026-03-21 09:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:38.056713408 +0000 UTC m=+201.651911677" watchObservedRunningTime="2026-03-21 09:01:38.058752483 +0000 UTC m=+201.653950782" Mar 21 09:01:38 crc kubenswrapper[4932]: I0321 09:01:38.070574 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jmd8j" podStartSLOduration=144.070553474 podStartE2EDuration="2m24.070553474s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:38.070367958 +0000 UTC m=+201.665566247" watchObservedRunningTime="2026-03-21 09:01:38.070553474 +0000 UTC m=+201.665751743" Mar 21 09:01:38 crc kubenswrapper[4932]: I0321 09:01:38.085337 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-svc74" podStartSLOduration=144.085315708 podStartE2EDuration="2m24.085315708s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:38.084653957 +0000 UTC m=+201.679852256" watchObservedRunningTime="2026-03-21 09:01:38.085315708 +0000 UTC m=+201.680514017" Mar 21 09:01:38 crc kubenswrapper[4932]: E0321 09:01:38.176887 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:38 crc kubenswrapper[4932]: I0321 09:01:38.701997 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:38 crc kubenswrapper[4932]: E0321 09:01:38.702172 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:39 crc kubenswrapper[4932]: I0321 09:01:39.701887 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:39 crc kubenswrapper[4932]: I0321 09:01:39.702038 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:39 crc kubenswrapper[4932]: I0321 09:01:39.701887 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:39 crc kubenswrapper[4932]: E0321 09:01:39.702142 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:39 crc kubenswrapper[4932]: E0321 09:01:39.702656 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:39 crc kubenswrapper[4932]: E0321 09:01:39.702787 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:40 crc kubenswrapper[4932]: I0321 09:01:40.701642 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:40 crc kubenswrapper[4932]: E0321 09:01:40.701823 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.692657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.692785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.692863 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.692984 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.693084 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 09:03:43.693026181 +0000 UTC m=+327.288224490 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.693122 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.693186 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:03:43.693160065 +0000 UTC m=+327.288358594 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.693298 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 09:03:43.693264729 +0000 UTC m=+327.288463038 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.701997 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.702100 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.702230 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.702704 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.702766 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.703225 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.703828 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.704122 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.794074 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:41 crc kubenswrapper[4932]: I0321 09:01:41.794165 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794336 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794396 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794412 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794492 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 09:03:43.794468773 +0000 UTC m=+327.389667062 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794491 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794583 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794605 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:01:41 crc kubenswrapper[4932]: E0321 09:01:41.794691 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 09:03:43.794670109 +0000 UTC m=+327.389868398 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.702155 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:42 crc kubenswrapper[4932]: E0321 09:01:42.702420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.899002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.899302 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.899319 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.899337 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.899375 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T09:01:42Z","lastTransitionTime":"2026-03-21T09:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.946257 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj"] Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.946927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.949198 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.949436 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.950373 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 21 09:01:42 crc kubenswrapper[4932]: I0321 09:01:42.951491 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.009778 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 
09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.009938 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.009983 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.010153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.010257 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.111837 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.111949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.112029 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.112086 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.112110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 
21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.112188 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.112313 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.113543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.138227 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: \"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.148620 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l4tcj\" (UID: 
\"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: E0321 09:01:43.179136 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.273554 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.701455 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.701520 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:43 crc kubenswrapper[4932]: E0321 09:01:43.701680 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.701745 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:43 crc kubenswrapper[4932]: E0321 09:01:43.701952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:43 crc kubenswrapper[4932]: E0321 09:01:43.702006 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.715500 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" event={"ID":"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9","Type":"ContainerStarted","Data":"6586d192036dcfaa844fb724883f9c0660f663256f5758bb64411fd62435edb3"} Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.715593 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" event={"ID":"69008f0a-9d61-4aa4-9a79-3ab85bf0e0a9","Type":"ContainerStarted","Data":"1dd3e91326c5826ba7224e396577e9ea5ab67826606020afde7065c61efa391f"} Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.740091 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l4tcj" podStartSLOduration=149.740065578 
podStartE2EDuration="2m29.740065578s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:01:43.738099787 +0000 UTC m=+207.333298126" watchObservedRunningTime="2026-03-21 09:01:43.740065578 +0000 UTC m=+207.335263887" Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.751627 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 21 09:01:43 crc kubenswrapper[4932]: I0321 09:01:43.762874 4932 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 21 09:01:44 crc kubenswrapper[4932]: I0321 09:01:44.702282 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:44 crc kubenswrapper[4932]: E0321 09:01:44.702509 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:45 crc kubenswrapper[4932]: I0321 09:01:45.701840 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:45 crc kubenswrapper[4932]: I0321 09:01:45.701870 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:45 crc kubenswrapper[4932]: E0321 09:01:45.702058 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:45 crc kubenswrapper[4932]: I0321 09:01:45.702129 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:45 crc kubenswrapper[4932]: E0321 09:01:45.702223 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:45 crc kubenswrapper[4932]: E0321 09:01:45.702362 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:46 crc kubenswrapper[4932]: I0321 09:01:46.702109 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:46 crc kubenswrapper[4932]: E0321 09:01:46.702290 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:47 crc kubenswrapper[4932]: I0321 09:01:47.702630 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:47 crc kubenswrapper[4932]: I0321 09:01:47.704751 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:47 crc kubenswrapper[4932]: I0321 09:01:47.704757 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:47 crc kubenswrapper[4932]: E0321 09:01:47.704914 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:47 crc kubenswrapper[4932]: E0321 09:01:47.705227 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:47 crc kubenswrapper[4932]: E0321 09:01:47.704608 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:48 crc kubenswrapper[4932]: E0321 09:01:48.179770 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:48 crc kubenswrapper[4932]: I0321 09:01:48.701758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:48 crc kubenswrapper[4932]: E0321 09:01:48.702049 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:49 crc kubenswrapper[4932]: I0321 09:01:49.701924 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:49 crc kubenswrapper[4932]: I0321 09:01:49.701975 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:49 crc kubenswrapper[4932]: E0321 09:01:49.702488 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:49 crc kubenswrapper[4932]: I0321 09:01:49.702029 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:49 crc kubenswrapper[4932]: E0321 09:01:49.702618 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:49 crc kubenswrapper[4932]: E0321 09:01:49.702805 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:50 crc kubenswrapper[4932]: I0321 09:01:50.701863 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:50 crc kubenswrapper[4932]: E0321 09:01:50.702114 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:51 crc kubenswrapper[4932]: I0321 09:01:51.701806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:51 crc kubenswrapper[4932]: I0321 09:01:51.701863 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:51 crc kubenswrapper[4932]: I0321 09:01:51.702050 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:51 crc kubenswrapper[4932]: E0321 09:01:51.702068 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:51 crc kubenswrapper[4932]: E0321 09:01:51.702384 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:51 crc kubenswrapper[4932]: E0321 09:01:51.702478 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:52 crc kubenswrapper[4932]: I0321 09:01:52.702250 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:52 crc kubenswrapper[4932]: E0321 09:01:52.702543 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:53 crc kubenswrapper[4932]: E0321 09:01:53.181672 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:53 crc kubenswrapper[4932]: I0321 09:01:53.701888 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:53 crc kubenswrapper[4932]: E0321 09:01:53.702131 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:53 crc kubenswrapper[4932]: I0321 09:01:53.702502 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:53 crc kubenswrapper[4932]: E0321 09:01:53.702621 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:53 crc kubenswrapper[4932]: I0321 09:01:53.702793 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:53 crc kubenswrapper[4932]: E0321 09:01:53.703328 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:53 crc kubenswrapper[4932]: I0321 09:01:53.703761 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:01:53 crc kubenswrapper[4932]: E0321 09:01:53.703964 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2zqsw_openshift-ovn-kubernetes(96df7c54-2644-44b4-bcd7-13b82db2ea5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" Mar 21 09:01:54 crc kubenswrapper[4932]: I0321 09:01:54.701968 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:54 crc kubenswrapper[4932]: E0321 09:01:54.702185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:55 crc kubenswrapper[4932]: I0321 09:01:55.704716 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:55 crc kubenswrapper[4932]: I0321 09:01:55.704716 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:55 crc kubenswrapper[4932]: I0321 09:01:55.704952 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:55 crc kubenswrapper[4932]: E0321 09:01:55.705197 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:55 crc kubenswrapper[4932]: E0321 09:01:55.705384 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:55 crc kubenswrapper[4932]: E0321 09:01:55.705549 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:56 crc kubenswrapper[4932]: I0321 09:01:56.701612 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:56 crc kubenswrapper[4932]: E0321 09:01:56.701812 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:57 crc kubenswrapper[4932]: I0321 09:01:57.701525 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:57 crc kubenswrapper[4932]: I0321 09:01:57.701525 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:57 crc kubenswrapper[4932]: I0321 09:01:57.702536 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:57 crc kubenswrapper[4932]: E0321 09:01:57.702524 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:01:57 crc kubenswrapper[4932]: E0321 09:01:57.702743 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:57 crc kubenswrapper[4932]: E0321 09:01:57.702928 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:58 crc kubenswrapper[4932]: E0321 09:01:58.182414 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:01:58 crc kubenswrapper[4932]: I0321 09:01:58.701623 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:01:58 crc kubenswrapper[4932]: E0321 09:01:58.701874 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:01:59 crc kubenswrapper[4932]: I0321 09:01:59.702424 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:01:59 crc kubenswrapper[4932]: I0321 09:01:59.702487 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:01:59 crc kubenswrapper[4932]: I0321 09:01:59.702649 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:01:59 crc kubenswrapper[4932]: E0321 09:01:59.703375 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:01:59 crc kubenswrapper[4932]: E0321 09:01:59.703300 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:01:59 crc kubenswrapper[4932]: E0321 09:01:59.703557 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:00 crc kubenswrapper[4932]: I0321 09:02:00.702133 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:00 crc kubenswrapper[4932]: E0321 09:02:00.702496 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:01 crc kubenswrapper[4932]: I0321 09:02:01.701601 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:01 crc kubenswrapper[4932]: I0321 09:02:01.701655 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:01 crc kubenswrapper[4932]: I0321 09:02:01.701748 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:01 crc kubenswrapper[4932]: E0321 09:02:01.701802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:01 crc kubenswrapper[4932]: E0321 09:02:01.701986 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:01 crc kubenswrapper[4932]: E0321 09:02:01.702166 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:02 crc kubenswrapper[4932]: I0321 09:02:02.701898 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:02 crc kubenswrapper[4932]: E0321 09:02:02.702105 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:03 crc kubenswrapper[4932]: E0321 09:02:03.184266 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:02:03 crc kubenswrapper[4932]: I0321 09:02:03.702404 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:03 crc kubenswrapper[4932]: I0321 09:02:03.702407 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:03 crc kubenswrapper[4932]: I0321 09:02:03.702406 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:03 crc kubenswrapper[4932]: E0321 09:02:03.703318 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:03 crc kubenswrapper[4932]: E0321 09:02:03.703339 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:03 crc kubenswrapper[4932]: E0321 09:02:03.703469 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:04 crc kubenswrapper[4932]: I0321 09:02:04.701378 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:04 crc kubenswrapper[4932]: E0321 09:02:04.701559 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.702340 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.702451 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.702343 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:05 crc kubenswrapper[4932]: E0321 09:02:05.702633 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:05 crc kubenswrapper[4932]: E0321 09:02:05.702791 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:05 crc kubenswrapper[4932]: E0321 09:02:05.703035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.811102 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/1.log" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.812539 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/0.log" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.812804 4932 generic.go:334] "Generic (PLEG): container finished" podID="a038ce15-d375-452d-b38f-6893df65dee4" containerID="8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a" exitCode=1 Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.812905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerDied","Data":"8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a"} Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.813201 4932 scope.go:117] "RemoveContainer" containerID="41c31a234b2580bd8e5e2b1b0f7ebf27b21afbb887c0dfade462e035e21b5ab9" Mar 21 09:02:05 crc kubenswrapper[4932]: I0321 09:02:05.813806 4932 scope.go:117] "RemoveContainer" containerID="8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a" Mar 21 09:02:05 crc kubenswrapper[4932]: E0321 
09:02:05.814005 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jmd8j_openshift-multus(a038ce15-d375-452d-b38f-6893df65dee4)\"" pod="openshift-multus/multus-jmd8j" podUID="a038ce15-d375-452d-b38f-6893df65dee4" Mar 21 09:02:06 crc kubenswrapper[4932]: I0321 09:02:06.702654 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:06 crc kubenswrapper[4932]: E0321 09:02:06.703490 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:06 crc kubenswrapper[4932]: I0321 09:02:06.818588 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/1.log" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.702524 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.702584 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.702584 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:07 crc kubenswrapper[4932]: E0321 09:02:07.704557 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:07 crc kubenswrapper[4932]: E0321 09:02:07.704676 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:07 crc kubenswrapper[4932]: E0321 09:02:07.704838 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.705756 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.825966 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/3.log" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.829935 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerStarted","Data":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.830424 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:02:07 crc kubenswrapper[4932]: I0321 09:02:07.883828 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podStartSLOduration=173.883807705 podStartE2EDuration="2m53.883807705s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:07.882971029 +0000 UTC m=+231.478169308" watchObservedRunningTime="2026-03-21 09:02:07.883807705 +0000 UTC m=+231.479005964" Mar 21 09:02:08 crc kubenswrapper[4932]: E0321 09:02:08.185176 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 21 09:02:08 crc kubenswrapper[4932]: I0321 09:02:08.501357 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cpgnf"] Mar 21 09:02:08 crc kubenswrapper[4932]: I0321 09:02:08.501538 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:08 crc kubenswrapper[4932]: E0321 09:02:08.501634 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:09 crc kubenswrapper[4932]: I0321 09:02:09.702003 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:09 crc kubenswrapper[4932]: I0321 09:02:09.702042 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:09 crc kubenswrapper[4932]: E0321 09:02:09.702618 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:09 crc kubenswrapper[4932]: E0321 09:02:09.702831 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:09 crc kubenswrapper[4932]: I0321 09:02:09.702136 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:09 crc kubenswrapper[4932]: E0321 09:02:09.703068 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:10 crc kubenswrapper[4932]: I0321 09:02:10.701576 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:10 crc kubenswrapper[4932]: E0321 09:02:10.701799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:11 crc kubenswrapper[4932]: I0321 09:02:11.702075 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:11 crc kubenswrapper[4932]: I0321 09:02:11.702129 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:11 crc kubenswrapper[4932]: E0321 09:02:11.702308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:11 crc kubenswrapper[4932]: I0321 09:02:11.702450 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:11 crc kubenswrapper[4932]: E0321 09:02:11.702546 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:11 crc kubenswrapper[4932]: E0321 09:02:11.702777 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:12 crc kubenswrapper[4932]: I0321 09:02:12.702303 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:12 crc kubenswrapper[4932]: E0321 09:02:12.702545 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:13 crc kubenswrapper[4932]: E0321 09:02:13.186517 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:02:13 crc kubenswrapper[4932]: I0321 09:02:13.701966 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:13 crc kubenswrapper[4932]: I0321 09:02:13.702071 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:13 crc kubenswrapper[4932]: E0321 09:02:13.702200 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:13 crc kubenswrapper[4932]: I0321 09:02:13.702320 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:13 crc kubenswrapper[4932]: E0321 09:02:13.702566 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:13 crc kubenswrapper[4932]: E0321 09:02:13.702634 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:14 crc kubenswrapper[4932]: I0321 09:02:14.702303 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:14 crc kubenswrapper[4932]: E0321 09:02:14.702804 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:15 crc kubenswrapper[4932]: I0321 09:02:15.701950 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:15 crc kubenswrapper[4932]: E0321 09:02:15.702185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:15 crc kubenswrapper[4932]: I0321 09:02:15.702323 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:15 crc kubenswrapper[4932]: I0321 09:02:15.702463 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:15 crc kubenswrapper[4932]: E0321 09:02:15.702637 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:15 crc kubenswrapper[4932]: E0321 09:02:15.703209 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:16 crc kubenswrapper[4932]: I0321 09:02:16.701802 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:16 crc kubenswrapper[4932]: E0321 09:02:16.702044 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:16 crc kubenswrapper[4932]: I0321 09:02:16.702654 4932 scope.go:117] "RemoveContainer" containerID="8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a" Mar 21 09:02:16 crc kubenswrapper[4932]: I0321 09:02:16.885482 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/1.log" Mar 21 09:02:17 crc kubenswrapper[4932]: I0321 09:02:17.701897 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:17 crc kubenswrapper[4932]: I0321 09:02:17.701979 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:17 crc kubenswrapper[4932]: I0321 09:02:17.702052 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:17 crc kubenswrapper[4932]: E0321 09:02:17.705192 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:17 crc kubenswrapper[4932]: E0321 09:02:17.705659 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:17 crc kubenswrapper[4932]: E0321 09:02:17.705831 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:17 crc kubenswrapper[4932]: I0321 09:02:17.892453 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/1.log" Mar 21 09:02:17 crc kubenswrapper[4932]: I0321 09:02:17.892536 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerStarted","Data":"e15136e2e2b85a0626ee0e6b82b18a1a2feb09d9addb024999f8b5a7d32e367e"} Mar 21 09:02:18 crc kubenswrapper[4932]: E0321 09:02:18.188295 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:02:18 crc kubenswrapper[4932]: I0321 09:02:18.702184 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:18 crc kubenswrapper[4932]: E0321 09:02:18.702692 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:19 crc kubenswrapper[4932]: I0321 09:02:19.701556 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:19 crc kubenswrapper[4932]: I0321 09:02:19.701690 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:19 crc kubenswrapper[4932]: I0321 09:02:19.701567 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:19 crc kubenswrapper[4932]: E0321 09:02:19.701796 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:19 crc kubenswrapper[4932]: E0321 09:02:19.701889 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:19 crc kubenswrapper[4932]: E0321 09:02:19.701995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:20 crc kubenswrapper[4932]: I0321 09:02:20.702218 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:20 crc kubenswrapper[4932]: E0321 09:02:20.702487 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:21 crc kubenswrapper[4932]: I0321 09:02:21.424830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:21 crc kubenswrapper[4932]: E0321 09:02:21.425026 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:02:21 crc kubenswrapper[4932]: E0321 09:02:21.425140 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs podName:fb0a5470-935a-4f5a-9a19-f261a853a79c nodeName:}" failed. No retries permitted until 2026-03-21 09:04:23.425113404 +0000 UTC m=+367.020311703 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs") pod "network-metrics-daemon-cpgnf" (UID: "fb0a5470-935a-4f5a-9a19-f261a853a79c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 09:02:21 crc kubenswrapper[4932]: I0321 09:02:21.702045 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:21 crc kubenswrapper[4932]: I0321 09:02:21.702166 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:21 crc kubenswrapper[4932]: E0321 09:02:21.702395 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 09:02:21 crc kubenswrapper[4932]: I0321 09:02:21.702541 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:21 crc kubenswrapper[4932]: E0321 09:02:21.702647 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 09:02:21 crc kubenswrapper[4932]: E0321 09:02:21.702779 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 09:02:22 crc kubenswrapper[4932]: I0321 09:02:22.702430 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:22 crc kubenswrapper[4932]: E0321 09:02:22.702762 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cpgnf" podUID="fb0a5470-935a-4f5a-9a19-f261a853a79c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.554909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.602660 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xlbr5"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.607288 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.616576 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.617156 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.617216 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.617301 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bn5tw"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.620909 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.621904 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.622110 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.622243 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.622279 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.622478 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.623992 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 21 
09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.626644 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.646250 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649503 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8ffkm"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649597 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120be070-2828-4e64-ac15-e20d8eb7a59c-config\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28d77447-b859-4037-b20b-ab6ab1de8d5f-node-pullsecrets\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649680 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649696 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" 
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649930 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28d77447-b859-4037-b20b-ab6ab1de8d5f-audit-dir\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649964 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/120be070-2828-4e64-ac15-e20d8eb7a59c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.649991 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650039 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-config\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650069 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-serving-cert\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650114 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/120be070-2828-4e64-ac15-e20d8eb7a59c-images\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650162 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-audit\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650184 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-etcd-serving-ca\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-image-import-ca\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650246 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-etcd-client\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650271 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-encryption-config\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650314 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27l6v\" (UniqueName: \"kubernetes.io/projected/120be070-2828-4e64-ac15-e20d8eb7a59c-kube-api-access-27l6v\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650338 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkpr\" (UniqueName: \"kubernetes.io/projected/28d77447-b859-4037-b20b-ab6ab1de8d5f-kube-api-access-ndkpr\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650189 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650249 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650297 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650396 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.650896 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.651608 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.653233 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.653770 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.654142 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.654779 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.656548 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.657201 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.662294 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.663553 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.663718 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.663893 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.664738 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.664780 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.664999 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665086 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 
09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665240 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665256 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665499 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665507 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665701 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.665938 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.669416 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.670081 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.670144 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.670651 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.670687 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.670862 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.671040 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.671242 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.671276 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.671533 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.671694 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.672282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.672434 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.672572 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.672687 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.672801 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.673441 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.674419 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.674632 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.674763 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtbkl"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.675466 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.675938 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.676148 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2g9xk"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.676602 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2g9xk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.677851 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.678402 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.680829 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.681542 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.683054 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-v7lxk"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.683782 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.685989 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686093 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686337 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686506 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686511 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686707 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686795 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686737 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.687092 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.686935 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 
09:02:23.687272 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.691760 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.691927 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.692696 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.697594 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.699202 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.699834 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.700237 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.700656 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.701305 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.701547 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.702273 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.703130 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.705880 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.708962 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.713760 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.713915 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.714171 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.714322 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.716331 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.714688 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 21 
09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.737589 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.737693 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bpdhb"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.737927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.738230 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.738769 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.739135 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.739369 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.739499 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.740658 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.742620 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.742656 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.743562 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kbfjw"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.744259 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.744375 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qlsc5"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.744884 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.746439 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.747284 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.747331 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.747908 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.749143 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.749427 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.749436 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-z8gns"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.749611 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.749751 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750087 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wt26g"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750518 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750823 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750526 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750939 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a022e41-0a7d-4ab3-bcce-c404160d00c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750965 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrf2\" (UniqueName: \"kubernetes.io/projected/1e37527c-54d4-43e2-be4d-7fb7e98b6019-kube-api-access-cqrf2\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.750990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/400b3f5b-c001-4186-b86b-006d3bda3396-metrics-tls\") pod \"dns-operator-744455d44c-mtbkl\" (UID: \"400b3f5b-c001-4186-b86b-006d3bda3396\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751012 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvxn\" (UniqueName: \"kubernetes.io/projected/fa70de2c-7377-4903-9a8b-f889eb315031-kube-api-access-dwvxn\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751055 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e37527c-54d4-43e2-be4d-7fb7e98b6019-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751072 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-config\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751092 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751160 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-audit\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751179 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-etcd-serving-ca\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751194 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-image-import-ca\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-serving-cert\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751225 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-etcd-client\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751242 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-encryption-config\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751260 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa70de2c-7377-4903-9a8b-f889eb315031-serving-cert\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751280 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdf4\" (UniqueName: \"kubernetes.io/projected/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-kube-api-access-lmdf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751297 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64353439-1daf-44b8-bc4d-c91b54936c30-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751317 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2rs\" (UniqueName: \"kubernetes.io/projected/dad9bd53-9063-4f82-91e9-7c4563696223-kube-api-access-6p2rs\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751339 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27l6v\" (UniqueName: \"kubernetes.io/projected/120be070-2828-4e64-ac15-e20d8eb7a59c-kube-api-access-27l6v\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751391 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkpr\" (UniqueName: \"kubernetes.io/projected/28d77447-b859-4037-b20b-ab6ab1de8d5f-kube-api-access-ndkpr\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751408 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-encryption-config\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-oauth-config\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751606 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.752939 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-image-import-ca\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.753446 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-audit\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.751454 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dh7j\" (UniqueName: \"kubernetes.io/projected/7a022e41-0a7d-4ab3-bcce-c404160d00c9-kube-api-access-9dh7j\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.753951 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-etcd-serving-ca\") pod \"apiserver-76f77b778f-xlbr5\" (UID: 
\"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754026 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9znwm\" (UniqueName: \"kubernetes.io/projected/64353439-1daf-44b8-bc4d-c91b54936c30-kube-api-access-9znwm\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-etcd-client\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754098 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754125 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e2f4747-5e21-4512-8ac6-e7544c5c360f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zwq9w\" (UID: \"3e2f4747-5e21-4512-8ac6-e7544c5c360f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754151 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-client-ca\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: 
\"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754170 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnsk\" (UniqueName: \"kubernetes.io/projected/9c992d5e-0530-450e-a359-afe22329d324-kube-api-access-jmnsk\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754193 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754254 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-trusted-ca\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j964\" (UniqueName: \"kubernetes.io/projected/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-kube-api-access-2j964\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754290 4932 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754294 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2953b210-7e39-430a-824e-a1bd46ecff06-auth-proxy-config\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754387 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-serving-cert\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754410 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64353439-1daf-44b8-bc4d-c91b54936c30-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754434 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754606 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/120be070-2828-4e64-ac15-e20d8eb7a59c-config\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754693 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bk5j\" (UniqueName: \"kubernetes.io/projected/d869e800-3cce-4e0d-b822-158858ee632b-kube-api-access-9bk5j\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-config\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754772 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh95x\" (UniqueName: \"kubernetes.io/projected/a5e1cc78-be1f-45a2-87b3-73c62790c894-kube-api-access-lh95x\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28d77447-b859-4037-b20b-ab6ab1de8d5f-node-pullsecrets\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754822 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754840 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-config\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28d77447-b859-4037-b20b-ab6ab1de8d5f-audit-dir\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754873 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-client-ca\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.754959 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/120be070-2828-4e64-ac15-e20d8eb7a59c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755043 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8fh\" (UniqueName: \"kubernetes.io/projected/3e2f4747-5e21-4512-8ac6-e7544c5c360f-kube-api-access-gp8fh\") pod \"cluster-samples-operator-665b6dd947-zwq9w\" (UID: \"3e2f4747-5e21-4512-8ac6-e7544c5c360f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcnn\" (UniqueName: \"kubernetes.io/projected/2953b210-7e39-430a-824e-a1bd46ecff06-kube-api-access-bqcnn\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99sjz\" (UniqueName: \"kubernetes.io/projected/400b3f5b-c001-4186-b86b-006d3bda3396-kube-api-access-99sjz\") pod \"dns-operator-744455d44c-mtbkl\" (UID: \"400b3f5b-c001-4186-b86b-006d3bda3396\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755106 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755130 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-config\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755138 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28d77447-b859-4037-b20b-ab6ab1de8d5f-node-pullsecrets\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755303 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755323 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28d77447-b859-4037-b20b-ab6ab1de8d5f-audit-dir\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755387 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-serving-cert\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755469 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e37527c-54d4-43e2-be4d-7fb7e98b6019-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755515 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9c992d5e-0530-450e-a359-afe22329d324-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755533 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755555 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755591 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-config\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 
09:02:23.755620 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-service-ca-bundle\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755639 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-service-ca\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755661 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-oauth-serving-cert\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755680 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdphx\" (UniqueName: \"kubernetes.io/projected/4387c45a-d8f9-4478-b224-b4e656880aaf-kube-api-access-cdphx\") pod \"downloads-7954f5f757-2g9xk\" (UID: \"4387c45a-d8f9-4478-b224-b4e656880aaf\") " pod="openshift-console/downloads-7954f5f757-2g9xk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755703 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2953b210-7e39-430a-824e-a1bd46ecff06-machine-approver-tls\") pod \"machine-approver-56656f9798-r7htc\" (UID: 
\"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755788 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-config\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755832 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-audit-policies\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755850 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755878 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dad9bd53-9063-4f82-91e9-7c4563696223-audit-dir\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.755942 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120be070-2828-4e64-ac15-e20d8eb7a59c-config\") pod 
\"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-serving-cert\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756246 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c992d5e-0530-450e-a359-afe22329d324-serving-cert\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d869e800-3cce-4e0d-b822-158858ee632b-serving-cert\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756341 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-trusted-ca-bundle\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/120be070-2828-4e64-ac15-e20d8eb7a59c-images\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756407 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2953b210-7e39-430a-824e-a1bd46ecff06-config\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756483 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-config\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.756560 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28d77447-b859-4037-b20b-ab6ab1de8d5f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.757026 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.758540 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120be070-2828-4e64-ac15-e20d8eb7a59c-images\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.759240 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.759434 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.759622 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.759743 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.759972 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.760037 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.760854 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.760857 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/120be070-2828-4e64-ac15-e20d8eb7a59c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.761000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-serving-cert\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.766770 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.767327 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.767467 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.767804 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.768147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.768374 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.768648 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.769463 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-encryption-config\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.771279 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.772738 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.775030 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28d77447-b859-4037-b20b-ab6ab1de8d5f-etcd-client\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.780666 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9rsfv"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.781198 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.781789 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.782128 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.794586 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.795546 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.797430 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.802293 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.803177 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.804087 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.808867 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.811369 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.811561 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.819518 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.820384 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.821723 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.822151 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-524qf"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.822817 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.822877 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.822890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.823811 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.828311 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.829146 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.829452 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568060-6lptj"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.830318 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568060-6lptj" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.832473 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.841947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4blxs"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.844482 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568062-ckbkj"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.844995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.845252 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.845480 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wpxg"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.846338 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.849899 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xlbr5"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.851522 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.853587 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bn5tw"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.855747 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kl6dt"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.856502 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"] Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.856709 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-apiservice-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857083 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2bd26007-3124-4ad1-b3e9-9f875f8e7bc7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9rsfv\" (UID: \"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857117 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-config\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh95x\" (UniqueName: \"kubernetes.io/projected/a5e1cc78-be1f-45a2-87b3-73c62790c894-kube-api-access-lh95x\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857165 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0a1b4989-60a4-401b-aeec-b1fbb536aef1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857191 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-config\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857210 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-client-ca\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857256 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-config\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 
09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c434bdc-13ee-4405-9cba-fc4f7c18c659-trusted-ca\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857317 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnpq\" (UniqueName: \"kubernetes.io/projected/2bd26007-3124-4ad1-b3e9-9f875f8e7bc7-kube-api-access-2qnpq\") pod \"multus-admission-controller-857f4d67dd-9rsfv\" (UID: \"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857338 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scz25\" (UniqueName: \"kubernetes.io/projected/2a62a49d-2dd6-4378-925e-f361279f446c-kube-api-access-scz25\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp8fh\" (UniqueName: 
\"kubernetes.io/projected/3e2f4747-5e21-4512-8ac6-e7544c5c360f-kube-api-access-gp8fh\") pod \"cluster-samples-operator-665b6dd947-zwq9w\" (UID: \"3e2f4747-5e21-4512-8ac6-e7544c5c360f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857412 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcnn\" (UniqueName: \"kubernetes.io/projected/2953b210-7e39-430a-824e-a1bd46ecff06-kube-api-access-bqcnn\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99sjz\" (UniqueName: \"kubernetes.io/projected/400b3f5b-c001-4186-b86b-006d3bda3396-kube-api-access-99sjz\") pod \"dns-operator-744455d44c-mtbkl\" (UID: \"400b3f5b-c001-4186-b86b-006d3bda3396\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857473 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857498 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-serving-cert\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " 
pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857542 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-config\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857568 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0be2c67e-01c1-4548-b94d-99d4c3b97130-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857590 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qcq\" (UniqueName: \"kubernetes.io/projected/6c434bdc-13ee-4405-9cba-fc4f7c18c659-kube-api-access-p2qcq\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1e37527c-54d4-43e2-be4d-7fb7e98b6019-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857647 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9c992d5e-0530-450e-a359-afe22329d324-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857671 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/8309fb7a-e364-4f9b-a723-a0d4926a0a51-kube-api-access-wqqqf\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857695 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-config\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857718 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-service-ca-bundle\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-service-ca\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-oauth-serving-cert\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857779 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdphx\" (UniqueName: \"kubernetes.io/projected/4387c45a-d8f9-4478-b224-b4e656880aaf-kube-api-access-cdphx\") pod \"downloads-7954f5f757-2g9xk\" (UID: \"4387c45a-d8f9-4478-b224-b4e656880aaf\") " pod="openshift-console/downloads-7954f5f757-2g9xk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857806 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2953b210-7e39-430a-824e-a1bd46ecff06-machine-approver-tls\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857830 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5xfx\" (UniqueName: \"kubernetes.io/projected/4092f902-ab2d-4d16-a90e-f0e28265ee00-kube-api-access-g5xfx\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857858 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42488cf8-9ce9-4e6f-a6a7-e700926a8627-tmpfs\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857884 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-audit-policies\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857906 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857923 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dad9bd53-9063-4f82-91e9-7c4563696223-audit-dir\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857942 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-trusted-ca-bundle\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c992d5e-0530-450e-a359-afe22329d324-serving-cert\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.857990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d869e800-3cce-4e0d-b822-158858ee632b-serving-cert\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858009 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858035 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djw9n\" (UniqueName: \"kubernetes.io/projected/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-kube-api-access-djw9n\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" Mar 21 09:02:23 crc 
kubenswrapper[4932]: I0321 09:02:23.858053 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858071 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-ca\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858093 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2953b210-7e39-430a-824e-a1bd46ecff06-config\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858132 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8kj\" (UniqueName: 
\"kubernetes.io/projected/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-kube-api-access-bn8kj\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858166 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a022e41-0a7d-4ab3-bcce-c404160d00c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858189 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858208 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr72d\" (UniqueName: \"kubernetes.io/projected/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-kube-api-access-xr72d\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858227 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a1b4989-60a4-401b-aeec-b1fbb536aef1-proxy-tls\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: 
\"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrf2\" (UniqueName: \"kubernetes.io/projected/1e37527c-54d4-43e2-be4d-7fb7e98b6019-kube-api-access-cqrf2\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/400b3f5b-c001-4186-b86b-006d3bda3396-metrics-tls\") pod \"dns-operator-744455d44c-mtbkl\" (UID: \"400b3f5b-c001-4186-b86b-006d3bda3396\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858295 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvxn\" (UniqueName: \"kubernetes.io/projected/fa70de2c-7377-4903-9a8b-f889eb315031-kube-api-access-dwvxn\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858315 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be2c67e-01c1-4548-b94d-99d4c3b97130-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858317 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-client-ca\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858335 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41fabd1-0161-4897-92cd-7398e84f1f04-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c434bdc-13ee-4405-9cba-fc4f7c18c659-metrics-tls\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858651 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e37527c-54d4-43e2-be4d-7fb7e98b6019-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858679 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-config\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") 
" pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-srv-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858710 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858733 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-srv-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858766 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d6gx\" (UniqueName: \"kubernetes.io/projected/0a1b4989-60a4-401b-aeec-b1fbb536aef1-kube-api-access-4d6gx\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41fabd1-0161-4897-92cd-7398e84f1f04-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858873 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858905 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-service-ca\") pod \"etcd-operator-b45778765-qlsc5\" (UID: 
\"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858960 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4092f902-ab2d-4d16-a90e-f0e28265ee00-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858989 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c434bdc-13ee-4405-9cba-fc4f7c18c659-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.858998 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-config\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-serving-cert\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859081 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fa70de2c-7377-4903-9a8b-f889eb315031-serving-cert\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859115 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-config\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859165 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a62a49d-2dd6-4378-925e-f361279f446c-serving-cert\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859191 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64353439-1daf-44b8-bc4d-c91b54936c30-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859212 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859222 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-config\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdf4\" (UniqueName: \"kubernetes.io/projected/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-kube-api-access-lmdf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-webhook-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859294 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0be2c67e-01c1-4548-b94d-99d4c3b97130-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859326 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p2rs\" (UniqueName: \"kubernetes.io/projected/dad9bd53-9063-4f82-91e9-7c4563696223-kube-api-access-6p2rs\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859376 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6tg\" (UniqueName: \"kubernetes.io/projected/64d9430a-2f41-4dac-bfa7-9fa47a85db9a-kube-api-access-jc6tg\") pod \"auto-csr-approver-29568060-6lptj\" (UID: \"64d9430a-2f41-4dac-bfa7-9fa47a85db9a\") " pod="openshift-infra/auto-csr-approver-29568060-6lptj" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-encryption-config\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859448 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-oauth-config\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:23 crc 
kubenswrapper[4932]: I0321 09:02:23.859487 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dh7j\" (UniqueName: \"kubernetes.io/projected/7a022e41-0a7d-4ab3-bcce-c404160d00c9-kube-api-access-9dh7j\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9znwm\" (UniqueName: \"kubernetes.io/projected/64353439-1daf-44b8-bc4d-c91b54936c30-kube-api-access-9znwm\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859541 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3289d7-beb6-4959-85b3-e2161abd915b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-etcd-client\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859621 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a41fabd1-0161-4897-92cd-7398e84f1f04-config\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859646 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdkf\" (UniqueName: \"kubernetes.io/projected/0b3289d7-beb6-4959-85b3-e2161abd915b-kube-api-access-9qdkf\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e2f4747-5e21-4512-8ac6-e7544c5c360f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zwq9w\" (UID: \"3e2f4747-5e21-4512-8ac6-e7544c5c360f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859731 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-client-ca\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859756 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmnsk\" (UniqueName: \"kubernetes.io/projected/9c992d5e-0530-450e-a359-afe22329d324-kube-api-access-jmnsk\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859787 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j964\" (UniqueName: \"kubernetes.io/projected/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-kube-api-access-2j964\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859848 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-trusted-ca\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859873 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2953b210-7e39-430a-824e-a1bd46ecff06-auth-proxy-config\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859953 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-serving-cert\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.859981 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64353439-1daf-44b8-bc4d-c91b54936c30-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.860010 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8jj\" (UniqueName: \"kubernetes.io/projected/42488cf8-9ce9-4e6f-a6a7-e700926a8627-kube-api-access-jn8jj\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.860254 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-client\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.860393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bk5j\" (UniqueName: \"kubernetes.io/projected/d869e800-3cce-4e0d-b822-158858ee632b-kube-api-access-9bk5j\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.860615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.861743 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-config\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.862574 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.862619 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-trusted-ca-bundle\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.863151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-client-ca\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.863672 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.864083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa70de2c-7377-4903-9a8b-f889eb315031-serving-cert\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.864138 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtbkl"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.864148 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e37527c-54d4-43e2-be4d-7fb7e98b6019-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.864170 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8ffkm"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.864498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-oauth-serving-cert\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.865130 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-config\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.865762 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e2f4747-5e21-4512-8ac6-e7544c5c360f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zwq9w\" (UID: \"3e2f4747-5e21-4512-8ac6-e7544c5c360f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.866029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9c992d5e-0530-450e-a359-afe22329d324-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.866091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64353439-1daf-44b8-bc4d-c91b54936c30-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.866476 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-trusted-ca\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.866542 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2953b210-7e39-430a-824e-a1bd46ecff06-auth-proxy-config\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.866983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2953b210-7e39-430a-824e-a1bd46ecff06-config\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.867199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64353439-1daf-44b8-bc4d-c91b54936c30-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.867281 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2g9xk"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.867330 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.867653 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dad9bd53-9063-4f82-91e9-7c4563696223-audit-dir\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.867658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-audit-policies\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.868017 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa70de2c-7377-4903-9a8b-f889eb315031-service-ca-bundle\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.868038 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dad9bd53-9063-4f82-91e9-7c4563696223-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.868373 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-service-ca\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.868407 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.869894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c992d5e-0530-450e-a359-afe22329d324-serving-cert\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.870063 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.870066 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-config\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.870246 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.870457 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e37527c-54d4-43e2-be4d-7fb7e98b6019-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.870545 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-oauth-config\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.871283 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.871780 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-serving-cert\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.871946 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.872488 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-serving-cert\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.875606 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v7lxk"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.875652 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qlsc5"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.875667 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.879239 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-encryption-config\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.879475 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wt26g"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.879536 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/400b3f5b-c001-4186-b86b-006d3bda3396-metrics-tls\") pod \"dns-operator-744455d44c-mtbkl\" (UID: \"400b3f5b-c001-4186-b86b-006d3bda3396\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.879613 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2953b210-7e39-430a-824e-a1bd46ecff06-machine-approver-tls\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.880184 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-serving-cert\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.880564 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d869e800-3cce-4e0d-b822-158858ee632b-serving-cert\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.880723 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9rsfv"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.881713 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.882514 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.884970 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a022e41-0a7d-4ab3-bcce-c404160d00c9-serving-cert\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.893994 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dad9bd53-9063-4f82-91e9-7c4563696223-etcd-client\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.895843 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.899831 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.901487 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.903632 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.905387 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.906600 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bpdhb"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.907918 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.910720 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.912046 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kbfjw"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.913794 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.916596 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.917021 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.918026 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-945r8"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.919323 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-945r8"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.919619 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.922535 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.923976 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl6dt"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.925436 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4blxs"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.927648 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568062-ckbkj"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.929267 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.930725 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.931074 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wpxg"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.932129 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.933432 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-945r8"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.934820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.935836 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-524qf"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.936997 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.938146 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568060-6lptj"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.939378 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8n8zb"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.940207 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8n8zb"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.940461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bd9lw"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.941783 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bd9lw"]
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.941897 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.950371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.962386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6tg\" (UniqueName: \"kubernetes.io/projected/64d9430a-2f41-4dac-bfa7-9fa47a85db9a-kube-api-access-jc6tg\") pod \"auto-csr-approver-29568060-6lptj\" (UID: \"64d9430a-2f41-4dac-bfa7-9fa47a85db9a\") " pod="openshift-infra/auto-csr-approver-29568060-6lptj"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.962530 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3289d7-beb6-4959-85b3-e2161abd915b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.962574 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdkf\" (UniqueName: \"kubernetes.io/projected/0b3289d7-beb6-4959-85b3-e2161abd915b-kube-api-access-9qdkf\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.962616 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41fabd1-0161-4897-92cd-7398e84f1f04-config\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.962802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-client\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.962880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn8jj\" (UniqueName: \"kubernetes.io/projected/42488cf8-9ce9-4e6f-a6a7-e700926a8627-kube-api-access-jn8jj\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.963014 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-apiservice-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.963534 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2bd26007-3124-4ad1-b3e9-9f875f8e7bc7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9rsfv\" (UID: \"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.963586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a1b4989-60a4-401b-aeec-b1fbb536aef1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.963826 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-config\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c434bdc-13ee-4405-9cba-fc4f7c18c659-trusted-ca\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964563 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qnpq\" (UniqueName: \"kubernetes.io/projected/2bd26007-3124-4ad1-b3e9-9f875f8e7bc7-kube-api-access-2qnpq\") pod \"multus-admission-controller-857f4d67dd-9rsfv\" (UID: \"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964585 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scz25\" (UniqueName: \"kubernetes.io/projected/2a62a49d-2dd6-4378-925e-f361279f446c-kube-api-access-scz25\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964712 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964750 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0be2c67e-01c1-4548-b94d-99d4c3b97130-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964770 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2qcq\" (UniqueName: \"kubernetes.io/projected/6c434bdc-13ee-4405-9cba-fc4f7c18c659-kube-api-access-p2qcq\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964789 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/8309fb7a-e364-4f9b-a723-a0d4926a0a51-kube-api-access-wqqqf\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5xfx\" (UniqueName: \"kubernetes.io/projected/4092f902-ab2d-4d16-a90e-f0e28265ee00-kube-api-access-g5xfx\") pod \"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964849 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42488cf8-9ce9-4e6f-a6a7-e700926a8627-tmpfs\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964895 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964910 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djw9n\" (UniqueName: \"kubernetes.io/projected/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-kube-api-access-djw9n\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964930 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964955 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964972 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn8kj\" (UniqueName: \"kubernetes.io/projected/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-kube-api-access-bn8kj\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.964990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-ca\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965041 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965002 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-config\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965064 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr72d\" (UniqueName: \"kubernetes.io/projected/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-kube-api-access-xr72d\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be2c67e-01c1-4548-b94d-99d4c3b97130-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965110 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a1b4989-60a4-401b-aeec-b1fbb536aef1-proxy-tls\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-srv-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-srv-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965414 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c434bdc-13ee-4405-9cba-fc4f7c18c659-trusted-ca\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965168 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41fabd1-0161-4897-92cd-7398e84f1f04-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7"
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965571 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c434bdc-13ee-4405-9cba-fc4f7c18c659-metrics-tls\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") "
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d6gx\" (UniqueName: \"kubernetes.io/projected/0a1b4989-60a4-401b-aeec-b1fbb536aef1-kube-api-access-4d6gx\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965664 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41fabd1-0161-4897-92cd-7398e84f1f04-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965685 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-service-ca\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4092f902-ab2d-4d16-a90e-f0e28265ee00-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965749 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c434bdc-13ee-4405-9cba-fc4f7c18c659-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a62a49d-2dd6-4378-925e-f361279f446c-serving-cert\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965806 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-config\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965857 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" 
Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965877 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-webhook-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0be2c67e-01c1-4548-b94d-99d4c3b97130-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-ca\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.966432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a1b4989-60a4-401b-aeec-b1fbb536aef1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.965868 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42488cf8-9ce9-4e6f-a6a7-e700926a8627-tmpfs\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: 
\"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.966860 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-client\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.967301 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a62a49d-2dd6-4378-925e-f361279f446c-etcd-service-ca\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.969149 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c434bdc-13ee-4405-9cba-fc4f7c18c659-metrics-tls\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.970985 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a62a49d-2dd6-4378-925e-f361279f446c-serving-cert\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.971184 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.971571 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0be2c67e-01c1-4548-b94d-99d4c3b97130-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.991236 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 21 09:02:23 crc kubenswrapper[4932]: I0321 09:02:23.997021 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be2c67e-01c1-4548-b94d-99d4c3b97130-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.011205 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.031314 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.059395 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.070116 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.090964 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 21 09:02:24 crc 
kubenswrapper[4932]: I0321 09:02:24.116192 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.131186 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.150566 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.171162 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.190722 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.210181 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.231633 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.252090 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.270800 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.290802 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.318373 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.331162 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.350919 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.371308 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.391221 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.411923 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.431083 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.450816 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.460664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.470981 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.490680 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.497947 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.511302 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.532291 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.565640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkpr\" (UniqueName: \"kubernetes.io/projected/28d77447-b859-4037-b20b-ab6ab1de8d5f-kube-api-access-ndkpr\") pod \"apiserver-76f77b778f-xlbr5\" (UID: \"28d77447-b859-4037-b20b-ab6ab1de8d5f\") " pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.589303 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27l6v\" (UniqueName: \"kubernetes.io/projected/120be070-2828-4e64-ac15-e20d8eb7a59c-kube-api-access-27l6v\") pod \"machine-api-operator-5694c8668f-bn5tw\" (UID: \"120be070-2828-4e64-ac15-e20d8eb7a59c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.590301 4932 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.631109 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.651469 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.671401 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.691477 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.701795 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.712449 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.720405 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a41fabd1-0161-4897-92cd-7398e84f1f04-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.730812 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.750603 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.769741 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.774728 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41fabd1-0161-4897-92cd-7398e84f1f04-config\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.789492 4932 request.go:700] Waited for 1.020208924s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.790747 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.810290 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.818404 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-config\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.831110 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.840799 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.848576 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.851491 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.862524 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.873219 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.880399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2bd26007-3124-4ad1-b3e9-9f875f8e7bc7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9rsfv\" (UID: \"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.890647 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.911222 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.921517 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a1b4989-60a4-401b-aeec-b1fbb536aef1-proxy-tls\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.930491 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 21 09:02:24 crc kubenswrapper[4932]: I0321 09:02:24.952554 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.967047 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.967146 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-webhook-cert podName:42488cf8-9ce9-4e6f-a6a7-e700926a8627 nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.46711896 +0000 UTC m=+249.062317239 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-webhook-cert") pod "packageserver-d55dfcdfc-jqqm4" (UID: "42488cf8-9ce9-4e6f-a6a7-e700926a8627") : failed to sync secret cache: timed out waiting for the condition Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.969575 4932 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.969753 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume podName:8309fb7a-e364-4f9b-a723-a0d4926a0a51 nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.46968348 +0000 UTC m=+249.064919121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume") pod "collect-profiles-29568060-fq46x" (UID: "8309fb7a-e364-4f9b-a723-a0d4926a0a51") : failed to sync configmap cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.996773 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.996825 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.996892 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-profile-collector-cert podName:84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.496865455 +0000 UTC m=+249.092063724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-profile-collector-cert") pod "olm-operator-6b444d44fb-jhx9z" (UID: "84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.996964 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume podName:8309fb7a-e364-4f9b-a723-a0d4926a0a51 nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.496933447 +0000 UTC m=+249.092131716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume") pod "collect-profiles-29568060-fq46x" (UID: "8309fb7a-e364-4f9b-a723-a0d4926a0a51") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997060 4932 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997141 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997170 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4092f902-ab2d-4d16-a90e-f0e28265ee00-control-plane-machine-set-operator-tls podName:4092f902-ab2d-4d16-a90e-f0e28265ee00 nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.497145074 +0000 UTC m=+249.092343343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/4092f902-ab2d-4d16-a90e-f0e28265ee00-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-q6jcs" (UID: "4092f902-ab2d-4d16-a90e-f0e28265ee00") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997195 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b3289d7-beb6-4959-85b3-e2161abd915b-package-server-manager-serving-cert podName:0b3289d7-beb6-4959-85b3-e2161abd915b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.497186665 +0000 UTC m=+249.092384934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0b3289d7-beb6-4959-85b3-e2161abd915b-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-w2z2m" (UID: "0b3289d7-beb6-4959-85b3-e2161abd915b") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997246 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997266 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997276 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-srv-cert podName:84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.497268438 +0000 UTC m=+249.092466707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-srv-cert") pod "olm-operator-6b444d44fb-jhx9z" (UID: "84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997300 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997321 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-profile-collector-cert podName:1c1a2ec6-38b9-4009-91dd-1242d286b0ff nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.497303179 +0000 UTC m=+249.092501448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-profile-collector-cert") pod "catalog-operator-68c6474976-th6sl" (UID: "1c1a2ec6-38b9-4009-91dd-1242d286b0ff") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997333 4932 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997341 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-apiservice-cert podName:42488cf8-9ce9-4e6f-a6a7-e700926a8627 nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.49733248 +0000 UTC m=+249.092530749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-apiservice-cert") pod "packageserver-d55dfcdfc-jqqm4" (UID: "42488cf8-9ce9-4e6f-a6a7-e700926a8627") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:24 crc kubenswrapper[4932]: E0321 09:02:24.997384 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-srv-cert podName:1c1a2ec6-38b9-4009-91dd-1242d286b0ff nodeName:}" failed. No retries permitted until 2026-03-21 09:02:25.497374961 +0000 UTC m=+249.092573490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-srv-cert") pod "catalog-operator-68c6474976-th6sl" (UID: "1c1a2ec6-38b9-4009-91dd-1242d286b0ff") : failed to sync secret cache: timed out waiting for the condition
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.000006 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.000406 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.010650 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.031328 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.052431 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.072053 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.089931 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.111846 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.112430 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-machine-api/machine-api-operator-5694c8668f-bn5tw"]
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.134049 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.135664 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xlbr5"]
Mar 21 09:02:25 crc kubenswrapper[4932]: W0321 09:02:25.142667 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d77447_b859_4037_b20b_ab6ab1de8d5f.slice/crio-136f74447a54d9a39bf56e0567b650de392555a92a0185c638f5c66d8be1b68a WatchSource:0}: Error finding container 136f74447a54d9a39bf56e0567b650de392555a92a0185c638f5c66d8be1b68a: Status 404 returned error can't find the container with id 136f74447a54d9a39bf56e0567b650de392555a92a0185c638f5c66d8be1b68a
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.152923 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.171805 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.191537 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.210243 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.230431 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.250718 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.271306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.291534 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.330931 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.350243 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.369791 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.391391 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.411828 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.429914 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.450973 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.471504 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.491586 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.508043 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.508112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.508179 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-srv-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.508207 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-srv-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.508275 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4092f902-ab2d-4d16-a90e-f0e28265ee00-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.508302 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.509304 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-webhook-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.509420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3289d7-beb6-4959-85b3-e2161abd915b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.509514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-apiservice-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") "
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.509541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.509910 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.512699 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.515785 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4092f902-ab2d-4d16-a90e-f0e28265ee00-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.515807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-webhook-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.515884 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42488cf8-9ce9-4e6f-a6a7-e700926a8627-apiservice-cert\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.516139 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.516501 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.517517 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-srv-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.517598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-profile-collector-cert\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.517691 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b3289d7-beb6-4959-85b3-e2161abd915b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.518379 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-srv-cert\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.538106 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.550083 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.570627 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.591522 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.610455 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321
09:02:25.631020 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.650571 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.685461 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh95x\" (UniqueName: \"kubernetes.io/projected/a5e1cc78-be1f-45a2-87b3-73c62790c894-kube-api-access-lh95x\") pod \"console-f9d7485db-v7lxk\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.716320 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp8fh\" (UniqueName: \"kubernetes.io/projected/3e2f4747-5e21-4512-8ac6-e7544c5c360f-kube-api-access-gp8fh\") pod \"cluster-samples-operator-665b6dd947-zwq9w\" (UID: \"3e2f4747-5e21-4512-8ac6-e7544c5c360f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.731394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcnn\" (UniqueName: \"kubernetes.io/projected/2953b210-7e39-430a-824e-a1bd46ecff06-kube-api-access-bqcnn\") pod \"machine-approver-56656f9798-r7htc\" (UID: \"2953b210-7e39-430a-824e-a1bd46ecff06\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.744096 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99sjz\" (UniqueName: \"kubernetes.io/projected/400b3f5b-c001-4186-b86b-006d3bda3396-kube-api-access-99sjz\") pod \"dns-operator-744455d44c-mtbkl\" (UID: \"400b3f5b-c001-4186-b86b-006d3bda3396\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.766889 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.783132 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p2rs\" (UniqueName: \"kubernetes.io/projected/dad9bd53-9063-4f82-91e9-7c4563696223-kube-api-access-6p2rs\") pod \"apiserver-7bbb656c7d-ph6bm\" (UID: \"dad9bd53-9063-4f82-91e9-7c4563696223\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.789598 4932 request.go:700] Waited for 1.928962253s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.804992 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bk5j\" (UniqueName: \"kubernetes.io/projected/d869e800-3cce-4e0d-b822-158858ee632b-kube-api-access-9bk5j\") pod \"controller-manager-879f6c89f-8ffkm\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.815426 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.824471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dh7j\" (UniqueName: \"kubernetes.io/projected/7a022e41-0a7d-4ab3-bcce-c404160d00c9-kube-api-access-9dh7j\") pod \"route-controller-manager-6576b87f9c-lcz8s\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.847341 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9znwm\" (UniqueName: \"kubernetes.io/projected/64353439-1daf-44b8-bc4d-c91b54936c30-kube-api-access-9znwm\") pod \"openshift-controller-manager-operator-756b6f6bc6-764x4\" (UID: \"64353439-1daf-44b8-bc4d-c91b54936c30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.849990 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.865497 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmnsk\" (UniqueName: \"kubernetes.io/projected/9c992d5e-0530-450e-a359-afe22329d324-kube-api-access-jmnsk\") pod \"openshift-config-operator-7777fb866f-6qm6c\" (UID: \"9c992d5e-0530-450e-a359-afe22329d324\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.902491 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.912217 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.913223 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j964\" (UniqueName: \"kubernetes.io/projected/26b21258-5475-4f3e-bc9e-3f5c8e7fc83f-kube-api-access-2j964\") pod \"console-operator-58897d9998-bpdhb\" (UID: \"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f\") " pod="openshift-console-operator/console-operator-58897d9998-bpdhb"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.914590 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdf4\" (UniqueName: \"kubernetes.io/projected/b875799e-5cf0-4c34-8cd0-e66f896cb3aa-kube-api-access-lmdf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8g4ft\" (UID: \"b875799e-5cf0-4c34-8cd0-e66f896cb3aa\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.919376 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.926650 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.928093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdphx\" (UniqueName: \"kubernetes.io/projected/4387c45a-d8f9-4478-b224-b4e656880aaf-kube-api-access-cdphx\") pod \"downloads-7954f5f757-2g9xk\" (UID: \"4387c45a-d8f9-4478-b224-b4e656880aaf\") " pod="openshift-console/downloads-7954f5f757-2g9xk"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.936295 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" event={"ID":"120be070-2828-4e64-ac15-e20d8eb7a59c","Type":"ContainerStarted","Data":"40b9dc6e6136cacbd965451e822bcabb08860d43279dd9a34f568d2db4e33643"}
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.936386 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" event={"ID":"120be070-2828-4e64-ac15-e20d8eb7a59c","Type":"ContainerStarted","Data":"e9d178774188554c58ce0a45a6050baa4a7fbe76825844bca3e6befbebb13246"}
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.936400 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" event={"ID":"120be070-2828-4e64-ac15-e20d8eb7a59c","Type":"ContainerStarted","Data":"3fa6f70eef7d0175284fd88caf0efb1593e012a845de7900dbff06bb23ed48dc"}
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.938268 4932 generic.go:334] "Generic (PLEG): container finished" podID="28d77447-b859-4037-b20b-ab6ab1de8d5f" containerID="29c2c186da354dea974bb59a8be2d7387bf494fd8ccc7e932e517af3fb6a92a4" exitCode=0
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.938302 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" event={"ID":"28d77447-b859-4037-b20b-ab6ab1de8d5f","Type":"ContainerDied","Data":"29c2c186da354dea974bb59a8be2d7387bf494fd8ccc7e932e517af3fb6a92a4"}
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.938322 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" event={"ID":"28d77447-b859-4037-b20b-ab6ab1de8d5f","Type":"ContainerStarted","Data":"136f74447a54d9a39bf56e0567b650de392555a92a0185c638f5c66d8be1b68a"}
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.939196 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2g9xk"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.946058 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.949851 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrf2\" (UniqueName: \"kubernetes.io/projected/1e37527c-54d4-43e2-be4d-7fb7e98b6019-kube-api-access-cqrf2\") pod \"openshift-apiserver-operator-796bbdcf4f-lbpfm\" (UID: \"1e37527c-54d4-43e2-be4d-7fb7e98b6019\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.956694 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.966620 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.967381 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvxn\" (UniqueName: \"kubernetes.io/projected/fa70de2c-7377-4903-9a8b-f889eb315031-kube-api-access-dwvxn\") pod \"authentication-operator-69f744f599-rqlxn\" (UID: \"fa70de2c-7377-4903-9a8b-f889eb315031\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.971256 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 21 09:02:25 crc kubenswrapper[4932]: I0321 09:02:25.991190 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.012139 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.012228 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.030355 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.033845 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.050552 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.058595 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bpdhb"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.075393 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8ffkm"]
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.079666 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.092059 4932 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.105070 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"]
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.111411 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 21 09:02:26 crc kubenswrapper[4932]: W0321 09:02:26.117097 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd869e800_3cce_4e0d_b822_158858ee632b.slice/crio-0eb0e12e073e44053d982e4ba2958b14b4c5d1d615bbf1fc0f489e79e69d4af0 WatchSource:0}: Error finding container 0eb0e12e073e44053d982e4ba2958b14b4c5d1d615bbf1fc0f489e79e69d4af0: Status 404 returned error can't find the container with id 0eb0e12e073e44053d982e4ba2958b14b4c5d1d615bbf1fc0f489e79e69d4af0
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.129803 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 21 09:02:26 crc kubenswrapper[4932]: W0321 09:02:26.142850 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a022e41_0a7d_4ab3_bcce_c404160d00c9.slice/crio-f23b8a629caea107f5dbe1d7776fc0280dc5b0b83dec64ac2b627f33f0021d8f WatchSource:0}: Error finding container f23b8a629caea107f5dbe1d7776fc0280dc5b0b83dec64ac2b627f33f0021d8f: Status 404 returned error can't find the container with id f23b8a629caea107f5dbe1d7776fc0280dc5b0b83dec64ac2b627f33f0021d8f
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.159917 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.170044 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6tg\" (UniqueName: \"kubernetes.io/projected/64d9430a-2f41-4dac-bfa7-9fa47a85db9a-kube-api-access-jc6tg\") pod \"auto-csr-approver-29568060-6lptj\" (UID: \"64d9430a-2f41-4dac-bfa7-9fa47a85db9a\") " pod="openshift-infra/auto-csr-approver-29568060-6lptj"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.201012 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdkf\" (UniqueName: \"kubernetes.io/projected/0b3289d7-beb6-4959-85b3-e2161abd915b-kube-api-access-9qdkf\") pod \"package-server-manager-789f6589d5-w2z2m\" (UID: \"0b3289d7-beb6-4959-85b3-e2161abd915b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.206397 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn8jj\" (UniqueName: \"kubernetes.io/projected/42488cf8-9ce9-4e6f-a6a7-e700926a8627-kube-api-access-jn8jj\") pod \"packageserver-d55dfcdfc-jqqm4\" (UID: \"42488cf8-9ce9-4e6f-a6a7-e700926a8627\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.214516 4932 kubelet.go:2428]
"SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.220965 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.237406 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.241545 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qnpq\" (UniqueName: \"kubernetes.io/projected/2bd26007-3124-4ad1-b3e9-9f875f8e7bc7-kube-api-access-2qnpq\") pod \"multus-admission-controller-857f4d67dd-9rsfv\" (UID: \"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.266398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a41fabd1-0161-4897-92cd-7398e84f1f04-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nnhb7\" (UID: \"a41fabd1-0161-4897-92cd-7398e84f1f04\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.270027 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568060-6lptj" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.274063 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scz25\" (UniqueName: \"kubernetes.io/projected/2a62a49d-2dd6-4378-925e-f361279f446c-kube-api-access-scz25\") pod \"etcd-operator-b45778765-qlsc5\" (UID: \"2a62a49d-2dd6-4378-925e-f361279f446c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.298119 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr72d\" (UniqueName: \"kubernetes.io/projected/84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f-kube-api-access-xr72d\") pod \"olm-operator-6b444d44fb-jhx9z\" (UID: \"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.312217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0be2c67e-01c1-4548-b94d-99d4c3b97130-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmpnv\" (UID: \"0be2c67e-01c1-4548-b94d-99d4c3b97130\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.338281 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2qcq\" (UniqueName: \"kubernetes.io/projected/6c434bdc-13ee-4405-9cba-fc4f7c18c659-kube-api-access-p2qcq\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.351979 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqqf\" (UniqueName: 
\"kubernetes.io/projected/8309fb7a-e364-4f9b-a723-a0d4926a0a51-kube-api-access-wqqqf\") pod \"collect-profiles-29568060-fq46x\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.377328 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5xfx\" (UniqueName: \"kubernetes.io/projected/4092f902-ab2d-4d16-a90e-f0e28265ee00-kube-api-access-g5xfx\") pod \"control-plane-machine-set-operator-78cbb6b69f-q6jcs\" (UID: \"4092f902-ab2d-4d16-a90e-f0e28265ee00\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.395128 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.399175 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d6gx\" (UniqueName: \"kubernetes.io/projected/0a1b4989-60a4-401b-aeec-b1fbb536aef1-kube-api-access-4d6gx\") pod \"machine-config-controller-84d6567774-wxwjq\" (UID: \"0a1b4989-60a4-401b-aeec-b1fbb536aef1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.410682 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.427526 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djw9n\" (UniqueName: \"kubernetes.io/projected/1c1a2ec6-38b9-4009-91dd-1242d286b0ff-kube-api-access-djw9n\") pod \"catalog-operator-68c6474976-th6sl\" (UID: \"1c1a2ec6-38b9-4009-91dd-1242d286b0ff\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.427590 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn8kj\" (UniqueName: \"kubernetes.io/projected/2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4-kube-api-access-bn8kj\") pod \"kube-storage-version-migrator-operator-b67b599dd-94dz9\" (UID: \"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.434320 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.444343 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.452480 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2g9xk"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.459329 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.461779 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtbkl"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.466793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da6e3e5c-4596-49b1-b2ce-15a1dca234b7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4t2nv\" (UID: \"da6e3e5c-4596-49b1-b2ce-15a1dca234b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.471222 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c434bdc-13ee-4405-9cba-fc4f7c18c659-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4h8rg\" (UID: \"6c434bdc-13ee-4405-9cba-fc4f7c18c659\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.472779 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.481733 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.490961 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.501625 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.507829 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.509739 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.510037 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.519728 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.525156 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.546436 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:26 crc kubenswrapper[4932]: W0321 09:02:26.558943 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb875799e_5cf0_4c34_8cd0_e66f896cb3aa.slice/crio-6910daec3ae20906facd5392d37449135194cfd602beccbcd900e92373d654f0 WatchSource:0}: Error finding container 6910daec3ae20906facd5392d37449135194cfd602beccbcd900e92373d654f0: Status 404 returned error can't find the container with id 6910daec3ae20906facd5392d37449135194cfd602beccbcd900e92373d654f0 Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636317 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636426 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5zc\" (UniqueName: \"kubernetes.io/projected/341989e4-9f64-45f9-85f7-55bb89ca0447-kube-api-access-rh5zc\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636446 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636509 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-stats-auth\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636566 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsgc\" (UniqueName: \"kubernetes.io/projected/ff789aae-664d-4298-8122-f1d0751118b8-kube-api-access-5bsgc\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636584 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636617 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/341989e4-9f64-45f9-85f7-55bb89ca0447-signing-key\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636636 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b870d2fa-ac84-43c4-a552-73ab8a723e12-service-ca-bundle\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636689 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-trusted-ca\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636715 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhm8s\" (UniqueName: \"kubernetes.io/projected/b870d2fa-ac84-43c4-a552-73ab8a723e12-kube-api-access-vhm8s\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " 
pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636742 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff789aae-664d-4298-8122-f1d0751118b8-proxy-tls\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636761 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrp5m\" (UniqueName: \"kubernetes.io/projected/a1dbcfef-3656-4dac-82a6-78cc48d655df-kube-api-access-hrp5m\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636803 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636826 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-dir\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636850 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636867 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-metrics-certs\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636903 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-certificates\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: 
\"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636918 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/341989e4-9f64-45f9-85f7-55bb89ca0447-signing-cabundle\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636935 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-policies\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636970 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4hh\" (UniqueName: \"kubernetes.io/projected/cc4faed3-5d31-4c1d-bca1-140b12b1ec30-kube-api-access-kk4hh\") pod \"migrator-59844c95c7-szhmn\" (UID: \"cc4faed3-5d31-4c1d-bca1-140b12b1ec30\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.636988 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-tls\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637006 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637063 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637085 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637107 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637125 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-default-certificate\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637162 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff789aae-664d-4298-8122-f1d0751118b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637179 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-bound-sa-token\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637196 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbwbd\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-kube-api-access-bbwbd\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.637216 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff789aae-664d-4298-8122-f1d0751118b8-images\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: E0321 09:02:26.639510 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.139487001 +0000 UTC m=+250.734685360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.665020 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.697036 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bpdhb"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.703884 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.740558 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:26 crc kubenswrapper[4932]: E0321 09:02:26.740758 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.240725786 +0000 UTC m=+250.835924055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.740988 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-mountpoint-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741095 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0c3468db-1929-4073-af49-e43a543ef0ae-node-bootstrap-token\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741140 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741165 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741225 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-default-certificate\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc 
kubenswrapper[4932]: I0321 09:02:26.741265 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff789aae-664d-4298-8122-f1d0751118b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741319 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-bound-sa-token\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbwbd\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-kube-api-access-bbwbd\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741513 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18878259-949d-4a7b-886e-cd390bd5f74e-config\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff789aae-664d-4298-8122-f1d0751118b8-images\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: 
\"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741600 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741675 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741695 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5zc\" (UniqueName: \"kubernetes.io/projected/341989e4-9f64-45f9-85f7-55bb89ca0447-kube-api-access-rh5zc\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741712 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.741765 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747009 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-stats-auth\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747048 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bsgc\" (UniqueName: \"kubernetes.io/projected/ff789aae-664d-4298-8122-f1d0751118b8-kube-api-access-5bsgc\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747214 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747273 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/341989e4-9f64-45f9-85f7-55bb89ca0447-signing-key\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747300 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b870d2fa-ac84-43c4-a552-73ab8a723e12-service-ca-bundle\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747342 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdxf\" (UniqueName: \"kubernetes.io/projected/e980bce8-8b4c-4e96-b7a8-7b5465cfeae4-kube-api-access-drdxf\") pod \"ingress-canary-kl6dt\" (UID: \"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4\") " pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747406 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr4qd\" (UniqueName: \"kubernetes.io/projected/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-kube-api-access-xr4qd\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-config-volume\") pod 
\"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747762 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-trusted-ca\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747790 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhm8s\" (UniqueName: \"kubernetes.io/projected/b870d2fa-ac84-43c4-a552-73ab8a723e12-kube-api-access-vhm8s\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747885 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff789aae-664d-4298-8122-f1d0751118b8-proxy-tls\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747925 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2gq\" (UniqueName: \"kubernetes.io/projected/e61399d9-1d7d-4109-b634-3bbcdad81d2a-kube-api-access-6q2gq\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.747996 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwf88\" (UniqueName: 
\"kubernetes.io/projected/0c3468db-1929-4073-af49-e43a543ef0ae-kube-api-access-jwf88\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748021 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxjh\" (UniqueName: \"kubernetes.io/projected/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-kube-api-access-rbxjh\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748095 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748120 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrp5m\" (UniqueName: \"kubernetes.io/projected/a1dbcfef-3656-4dac-82a6-78cc48d655df-kube-api-access-hrp5m\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748157 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18878259-949d-4a7b-886e-cd390bd5f74e-serving-cert\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 
21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748231 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748486 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e980bce8-8b4c-4e96-b7a8-7b5465cfeae4-cert\") pod \"ingress-canary-kl6dt\" (UID: \"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4\") " pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748715 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-dir\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748756 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-metrics-tls\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748797 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.748963 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-metrics-certs\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749318 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-certificates\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/341989e4-9f64-45f9-85f7-55bb89ca0447-signing-cabundle\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749495 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-policies\") pod \"oauth-openshift-558db77b4-wt26g\" 
(UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749605 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2mg\" (UniqueName: \"kubernetes.io/projected/18878259-949d-4a7b-886e-cd390bd5f74e-kube-api-access-xb2mg\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749675 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9v9\" (UniqueName: \"kubernetes.io/projected/026bb1a2-7881-45a8-8845-53d8bbcb4166-kube-api-access-vh9v9\") pod \"auto-csr-approver-29568062-ckbkj\" (UID: \"026bb1a2-7881-45a8-8845-53d8bbcb4166\") " pod="openshift-infra/auto-csr-approver-29568062-ckbkj" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-csi-data-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749905 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-socket-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.749962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-tls\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.750006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4hh\" (UniqueName: \"kubernetes.io/projected/cc4faed3-5d31-4c1d-bca1-140b12b1ec30-kube-api-access-kk4hh\") pod \"migrator-59844c95c7-szhmn\" (UID: \"cc4faed3-5d31-4c1d-bca1-140b12b1ec30\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.750036 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.750253 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-plugins-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 
09:02:26.750383 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.750489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0c3468db-1929-4073-af49-e43a543ef0ae-certs\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.750510 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-registration-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.766741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.768134 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.769218 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-dir\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.771158 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-policies\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.782555 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-certificates\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.784010 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.796729 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.801471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.803054 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.803790 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.804162 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v7lxk"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.804540 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff789aae-664d-4298-8122-f1d0751118b8-images\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: 
\"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.804825 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.805286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/341989e4-9f64-45f9-85f7-55bb89ca0447-signing-cabundle\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: E0321 09:02:26.805859 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.305840573 +0000 UTC m=+250.901038842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.805855 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-trusted-ca\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.805942 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.806056 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.806543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-default-certificate\") pod 
\"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.806609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff789aae-664d-4298-8122-f1d0751118b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.808313 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b870d2fa-ac84-43c4-a552-73ab8a723e12-service-ca-bundle\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.808271 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.808785 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/341989e4-9f64-45f9-85f7-55bb89ca0447-signing-key\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.808978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-metrics-certs\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.809822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.810407 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-tls\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.810399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b870d2fa-ac84-43c4-a552-73ab8a723e12-stats-auth\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.823183 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rqlxn"] Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.825964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: 
\"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.827099 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff789aae-664d-4298-8122-f1d0751118b8-proxy-tls\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.827329 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bsgc\" (UniqueName: \"kubernetes.io/projected/ff789aae-664d-4298-8122-f1d0751118b8-kube-api-access-5bsgc\") pod \"machine-config-operator-74547568cd-r2clz\" (UID: \"ff789aae-664d-4298-8122-f1d0751118b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.833366 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.833492 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5zc\" (UniqueName: \"kubernetes.io/projected/341989e4-9f64-45f9-85f7-55bb89ca0447-kube-api-access-rh5zc\") pod \"service-ca-9c57cc56f-524qf\" (UID: \"341989e4-9f64-45f9-85f7-55bb89ca0447\") " pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.837645 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhm8s\" (UniqueName: 
\"kubernetes.io/projected/b870d2fa-ac84-43c4-a552-73ab8a723e12-kube-api-access-vhm8s\") pod \"router-default-5444994796-z8gns\" (UID: \"b870d2fa-ac84-43c4-a552-73ab8a723e12\") " pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.851189 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrp5m\" (UniqueName: \"kubernetes.io/projected/a1dbcfef-3656-4dac-82a6-78cc48d655df-kube-api-access-hrp5m\") pod \"oauth-openshift-558db77b4-wt26g\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852645 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852787 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-mountpoint-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852815 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0c3468db-1929-4073-af49-e43a543ef0ae-node-bootstrap-token\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/18878259-949d-4a7b-886e-cd390bd5f74e-config\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852889 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852929 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr4qd\" (UniqueName: \"kubernetes.io/projected/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-kube-api-access-xr4qd\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdxf\" (UniqueName: \"kubernetes.io/projected/e980bce8-8b4c-4e96-b7a8-7b5465cfeae4-kube-api-access-drdxf\") pod \"ingress-canary-kl6dt\" (UID: \"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4\") " pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.852978 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-config-volume\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853004 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6q2gq\" (UniqueName: \"kubernetes.io/projected/e61399d9-1d7d-4109-b634-3bbcdad81d2a-kube-api-access-6q2gq\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxjh\" (UniqueName: \"kubernetes.io/projected/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-kube-api-access-rbxjh\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853042 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwf88\" (UniqueName: \"kubernetes.io/projected/0c3468db-1929-4073-af49-e43a543ef0ae-kube-api-access-jwf88\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18878259-949d-4a7b-886e-cd390bd5f74e-serving-cert\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e980bce8-8b4c-4e96-b7a8-7b5465cfeae4-cert\") pod \"ingress-canary-kl6dt\" (UID: \"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4\") " pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853463 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-metrics-tls\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853509 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2mg\" (UniqueName: \"kubernetes.io/projected/18878259-949d-4a7b-886e-cd390bd5f74e-kube-api-access-xb2mg\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9v9\" (UniqueName: \"kubernetes.io/projected/026bb1a2-7881-45a8-8845-53d8bbcb4166-kube-api-access-vh9v9\") pod \"auto-csr-approver-29568062-ckbkj\" (UID: \"026bb1a2-7881-45a8-8845-53d8bbcb4166\") " pod="openshift-infra/auto-csr-approver-29568062-ckbkj" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853547 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-csi-data-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-socket-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853599 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-plugins-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853636 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-registration-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.853652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0c3468db-1929-4073-af49-e43a543ef0ae-certs\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.854774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-config-volume\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.855012 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-mountpoint-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.855032 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-csi-data-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: E0321 09:02:26.855091 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.355073513 +0000 UTC m=+250.950271782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.855242 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-socket-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.855329 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-registration-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.855264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e61399d9-1d7d-4109-b634-3bbcdad81d2a-plugins-dir\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.855967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18878259-949d-4a7b-886e-cd390bd5f74e-config\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 
09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.856646 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.862208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18878259-949d-4a7b-886e-cd390bd5f74e-serving-cert\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.863988 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.868843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-metrics-tls\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.868923 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0c3468db-1929-4073-af49-e43a543ef0ae-certs\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" 
Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.869210 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbwbd\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-kube-api-access-bbwbd\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.869238 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e980bce8-8b4c-4e96-b7a8-7b5465cfeae4-cert\") pod \"ingress-canary-kl6dt\" (UID: \"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4\") " pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.871900 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0c3468db-1929-4073-af49-e43a543ef0ae-node-bootstrap-token\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.889056 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-bound-sa-token\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.913046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4hh\" (UniqueName: \"kubernetes.io/projected/cc4faed3-5d31-4c1d-bca1-140b12b1ec30-kube-api-access-kk4hh\") pod \"migrator-59844c95c7-szhmn\" (UID: \"cc4faed3-5d31-4c1d-bca1-140b12b1ec30\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.946842 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwf88\" (UniqueName: \"kubernetes.io/projected/0c3468db-1929-4073-af49-e43a543ef0ae-kube-api-access-jwf88\") pod \"machine-config-server-8n8zb\" (UID: \"0c3468db-1929-4073-af49-e43a543ef0ae\") " pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.947337 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" event={"ID":"28d77447-b859-4037-b20b-ab6ab1de8d5f","Type":"ContainerStarted","Data":"899de10a3d70bcf1f3f10ff0e47079534a444d17682e0c0a0105b3b3020f29d5"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.951079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" event={"ID":"64353439-1daf-44b8-bc4d-c91b54936c30","Type":"ContainerStarted","Data":"f8ba097c5ea43c1eb2224de029f4fbc35fdb076b29e082e72d39ffe9c48bd989"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.962923 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.964098 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568060-6lptj"] Mar 21 09:02:26 crc kubenswrapper[4932]: E0321 09:02:26.964118 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.463520514 +0000 UTC m=+251.058718783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.967535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" event={"ID":"7a022e41-0a7d-4ab3-bcce-c404160d00c9","Type":"ContainerStarted","Data":"8209e9cddd8cc4231930316e06f635c24d8af46405f07407f81968eb0deeea93"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.967568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" event={"ID":"7a022e41-0a7d-4ab3-bcce-c404160d00c9","Type":"ContainerStarted","Data":"f23b8a629caea107f5dbe1d7776fc0280dc5b0b83dec64ac2b627f33f0021d8f"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.968270 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.971728 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" event={"ID":"d869e800-3cce-4e0d-b822-158858ee632b","Type":"ContainerStarted","Data":"4d534460872774905ff8ca893964eab4b655f60474cb295b884808dc476c3cfe"} Mar 21 09:02:26 crc 
kubenswrapper[4932]: I0321 09:02:26.971787 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" event={"ID":"d869e800-3cce-4e0d-b822-158858ee632b","Type":"ContainerStarted","Data":"0eb0e12e073e44053d982e4ba2958b14b4c5d1d615bbf1fc0f489e79e69d4af0"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.972790 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.981600 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" event={"ID":"3e2f4747-5e21-4512-8ac6-e7544c5c360f","Type":"ContainerStarted","Data":"1159d7bbef3368a2c0f4e5de657f08c26e2c7f8d98a87a4c3f538ef9a359deb1"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.987421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxjh\" (UniqueName: \"kubernetes.io/projected/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-kube-api-access-rbxjh\") pod \"marketplace-operator-79b997595-6wpxg\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.990973 4932 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lcz8s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.991033 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" podUID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.995009 4932 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8ffkm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.995036 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" podUID="d869e800-3cce-4e0d-b822-158858ee632b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.995335 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" event={"ID":"2953b210-7e39-430a-824e-a1bd46ecff06","Type":"ContainerStarted","Data":"a9bb0de5dd4915fc90ad3abe5fb6e5dfc694869e87bc31a74173ef77834d4d98"} Mar 21 09:02:26 crc kubenswrapper[4932]: I0321 09:02:26.995434 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" event={"ID":"2953b210-7e39-430a-824e-a1bd46ecff06","Type":"ContainerStarted","Data":"b4f8138e44b4da8bbf6682e227e5e22b673cb3e305e493e5713c074a5dbf4568"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.004841 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2gq\" (UniqueName: \"kubernetes.io/projected/e61399d9-1d7d-4109-b634-3bbcdad81d2a-kube-api-access-6q2gq\") pod \"csi-hostpathplugin-bd9lw\" (UID: \"e61399d9-1d7d-4109-b634-3bbcdad81d2a\") " pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.020784 
4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.026819 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.058426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr4qd\" (UniqueName: \"kubernetes.io/projected/3c8ae130-bd13-49ec-a21c-bb7a4c022a8b-kube-api-access-xr4qd\") pod \"dns-default-945r8\" (UID: \"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b\") " pod="openshift-dns/dns-default-945r8" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.059482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdxf\" (UniqueName: \"kubernetes.io/projected/e980bce8-8b4c-4e96-b7a8-7b5465cfeae4-kube-api-access-drdxf\") pod \"ingress-canary-kl6dt\" (UID: \"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4\") " pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.064310 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.064387 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.065443 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" event={"ID":"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f","Type":"ContainerStarted","Data":"8bfd4600d7edccaea562b66b94b441d89c432fc68ffdd62ec21d279eb7361d4c"} Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.065479 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.565456231 +0000 UTC m=+251.160654500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.075963 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" event={"ID":"9c992d5e-0530-450e-a359-afe22329d324","Type":"ContainerStarted","Data":"71f15706a4de627272d050ce4370a915803ad4bff28b40c3bd3bda83cff2b203"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.077881 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2mg\" (UniqueName: \"kubernetes.io/projected/18878259-949d-4a7b-886e-cd390bd5f74e-kube-api-access-xb2mg\") pod \"service-ca-operator-777779d784-4blxs\" (UID: \"18878259-949d-4a7b-886e-cd390bd5f74e\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.084427 4932 generic.go:334] "Generic (PLEG): container finished" podID="dad9bd53-9063-4f82-91e9-7c4563696223" containerID="519842f587f499d3ee09f511187d2909290148a383c3ba31e459fed7a4d14b5d" exitCode=0 Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.085498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" event={"ID":"dad9bd53-9063-4f82-91e9-7c4563696223","Type":"ContainerDied","Data":"519842f587f499d3ee09f511187d2909290148a383c3ba31e459fed7a4d14b5d"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.085963 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" event={"ID":"dad9bd53-9063-4f82-91e9-7c4563696223","Type":"ContainerStarted","Data":"8da1a0f1a4583b8a24cd0a39b0e1d96760c6c48f2fc87ebec691a280805812b8"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.091125 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.101495 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" event={"ID":"b875799e-5cf0-4c34-8cd0-e66f896cb3aa","Type":"ContainerStarted","Data":"6910daec3ae20906facd5392d37449135194cfd602beccbcd900e92373d654f0"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.102280 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9v9\" (UniqueName: \"kubernetes.io/projected/026bb1a2-7881-45a8-8845-53d8bbcb4166-kube-api-access-vh9v9\") pod \"auto-csr-approver-29568062-ckbkj\" (UID: \"026bb1a2-7881-45a8-8845-53d8bbcb4166\") " pod="openshift-infra/auto-csr-approver-29568062-ckbkj" Mar 21 09:02:27 crc kubenswrapper[4932]: W0321 09:02:27.111812 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d9430a_2f41_4dac_bfa7_9fa47a85db9a.slice/crio-a83d27cb9248b45acb70811840e27868f76205f4e1db393772b15f93ff7e59dd WatchSource:0}: Error finding container a83d27cb9248b45acb70811840e27868f76205f4e1db393772b15f93ff7e59dd: Status 404 returned error can't find the container with id a83d27cb9248b45acb70811840e27868f76205f4e1db393772b15f93ff7e59dd Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.122013 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2g9xk" event={"ID":"4387c45a-d8f9-4478-b224-b4e656880aaf","Type":"ContainerStarted","Data":"7af84661402b38d77fa93eae31bca37e99b75d629e2779736f4e1d47117250aa"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.127238 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" 
event={"ID":"400b3f5b-c001-4186-b86b-006d3bda3396","Type":"ContainerStarted","Data":"920985f1d325479693b15b2b8834e61c3a02ce16ad34e6391ba5dd04615c046f"} Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.130242 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-524qf" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.141816 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.169662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.171945 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.67191464 +0000 UTC m=+251.267113069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.179287 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.185249 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.206409 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.213809 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl6dt" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.222264 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-945r8" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.228405 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8n8zb" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.235104 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.274248 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.274691 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 09:02:27.774674803 +0000 UTC m=+251.369873062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.327070 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.348426 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.375726 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qlsc5"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.376608 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.385895 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.88586493 +0000 UTC m=+251.481063199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.399768 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.441636 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.464703 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9rsfv"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.479197 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.479656 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:27.979638141 +0000 UTC m=+251.574836410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.501783 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.523834 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.583048 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.587804 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.087777462 +0000 UTC m=+251.682975871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.596819 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.625755 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.639478 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"] Mar 21 09:02:27 crc kubenswrapper[4932]: W0321 09:02:27.669887 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42488cf8_9ce9_4e6f_a6a7_e700926a8627.slice/crio-f071901c8d69069140cb979c21ae7e611ce05f8326bf8580a9450fc6b4a2d662 WatchSource:0}: Error finding container f071901c8d69069140cb979c21ae7e611ce05f8326bf8580a9450fc6b4a2d662: Status 404 returned error can't find the container with id f071901c8d69069140cb979c21ae7e611ce05f8326bf8580a9450fc6b4a2d662 Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.697673 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:27 
crc kubenswrapper[4932]: E0321 09:02:27.697944 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.197910637 +0000 UTC m=+251.793108906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.698242 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.698783 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.198763953 +0000 UTC m=+251.793962222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: W0321 09:02:27.745410 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda41fabd1_0161_4897_92cd_7398e84f1f04.slice/crio-38f9b091af40ce3dda02f53a8bd60782a6d78d2fcbb3a2a00f4045118ca2c631 WatchSource:0}: Error finding container 38f9b091af40ce3dda02f53a8bd60782a6d78d2fcbb3a2a00f4045118ca2c631: Status 404 returned error can't find the container with id 38f9b091af40ce3dda02f53a8bd60782a6d78d2fcbb3a2a00f4045118ca2c631 Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.799386 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.799935 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.299896165 +0000 UTC m=+251.895094434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.818498 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.818542 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv"] Mar 21 09:02:27 crc kubenswrapper[4932]: W0321 09:02:27.874473 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c3468db_1929_4073_af49_e43a543ef0ae.slice/crio-e8af9850b2017083afb36e4e0f77410bcb324c64de5b80fdf056b63ba2dd0710 WatchSource:0}: Error finding container e8af9850b2017083afb36e4e0f77410bcb324c64de5b80fdf056b63ba2dd0710: Status 404 returned error can't find the container with id e8af9850b2017083afb36e4e0f77410bcb324c64de5b80fdf056b63ba2dd0710 Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.885154 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bn5tw" podStartSLOduration=193.885115976 podStartE2EDuration="3m13.885115976s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:27.88238011 +0000 UTC m=+251.477578379" watchObservedRunningTime="2026-03-21 09:02:27.885115976 +0000 UTC m=+251.480314245" Mar 21 09:02:27 crc 
kubenswrapper[4932]: I0321 09:02:27.905773 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"] Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.907467 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:27 crc kubenswrapper[4932]: E0321 09:02:27.907931 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.407917314 +0000 UTC m=+252.003115583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:27 crc kubenswrapper[4932]: I0321 09:02:27.992110 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn"] Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.008819 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.009170 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.509153698 +0000 UTC m=+252.104351967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.016105 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg"] Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.091176 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wt26g"] Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.110512 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.110943 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-21 09:02:28.6109229 +0000 UTC m=+252.206121169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.149063 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568062-ckbkj"] Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.156693 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz"] Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.185979 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" event={"ID":"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4","Type":"ContainerStarted","Data":"cc358b409dfd362f034022ac4e1c8f123c1e6268805d33053b9a6438a8463feb"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.187703 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" event={"ID":"42488cf8-9ce9-4e6f-a6a7-e700926a8627","Type":"ContainerStarted","Data":"f071901c8d69069140cb979c21ae7e611ce05f8326bf8580a9450fc6b4a2d662"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.189494 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" 
event={"ID":"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7","Type":"ContainerStarted","Data":"25ed343af7f1078e6e095df4264dffd8548e03d186bf7540ab24efe82c0262ef"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.196849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" event={"ID":"64353439-1daf-44b8-bc4d-c91b54936c30","Type":"ContainerStarted","Data":"00d802ede5f6a9a3b9d3b22e765025bc1cf8b149af019395c71d5c2bd0037dad"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.201678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" event={"ID":"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f","Type":"ContainerStarted","Data":"f10b615cba402398ff02ee2c9cf5511dd2a0254d8ebe611421e89bfc4db270d4"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.203823 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" event={"ID":"4092f902-ab2d-4d16-a90e-f0e28265ee00","Type":"ContainerStarted","Data":"3baf02eeb04017d084b6c302207b761ad86c87d91f1f5fb49ffccf715c24726c"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.210214 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" event={"ID":"b875799e-5cf0-4c34-8cd0-e66f896cb3aa","Type":"ContainerStarted","Data":"0f0b50a6041f2e8f0f77d9bc61c733965c31b372a51b8f49f931ea10b233afcf"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.211162 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:28 crc kubenswrapper[4932]: 
E0321 09:02:28.211762 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.711746871 +0000 UTC m=+252.306945140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.212753 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" event={"ID":"fa70de2c-7377-4903-9a8b-f889eb315031","Type":"ContainerStarted","Data":"2b86210a2917305d7ffdcecc8a7215f8ed82f9d46cb343799563916a567d501c"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.212780 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" event={"ID":"fa70de2c-7377-4903-9a8b-f889eb315031","Type":"ContainerStarted","Data":"bdedcab1acf897a791fb39eb343892ed90ad44f32d56e17440caa3252754fd13"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.214646 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" event={"ID":"26b21258-5475-4f3e-bc9e-3f5c8e7fc83f","Type":"ContainerStarted","Data":"e31c84fdde64d88e53050ecb925298794114c13549e1f83372cf6fdfcf67d891"} Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.214850 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-bpdhb"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.216465 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" event={"ID":"1e37527c-54d4-43e2-be4d-7fb7e98b6019","Type":"ContainerStarted","Data":"a1f0dc7d68b657cd931569e3527459afafd2fdd9ba6f7e7760abd64b6eb7d08e"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.217229 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" event={"ID":"da6e3e5c-4596-49b1-b2ce-15a1dca234b7","Type":"ContainerStarted","Data":"ce43c50065b6f9ec4c772a6a1374be71e68d5c5fad4494a27f703af5eddeda3b"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.218389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" event={"ID":"0a1b4989-60a4-401b-aeec-b1fbb536aef1","Type":"ContainerStarted","Data":"d92c62faca89ddcc4ca9c27dd9effc2d8b5a7fb8ba2491be15c6600a8a47d426"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.222186 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" event={"ID":"400b3f5b-c001-4186-b86b-006d3bda3396","Type":"ContainerStarted","Data":"4c14603532dcd81f010b6ac6488f18cd8404718c64ba440fd8abe7fdfc60b6ec"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.226117 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8n8zb" event={"ID":"0c3468db-1929-4073-af49-e43a543ef0ae","Type":"ContainerStarted","Data":"e8af9850b2017083afb36e4e0f77410bcb324c64de5b80fdf056b63ba2dd0710"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.228880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v7lxk" event={"ID":"a5e1cc78-be1f-45a2-87b3-73c62790c894","Type":"ContainerStarted","Data":"ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.228949 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v7lxk" event={"ID":"a5e1cc78-be1f-45a2-87b3-73c62790c894","Type":"ContainerStarted","Data":"554085bb79def92c92417e03029964c9e8e451f1ff07a2f19853e6f927adf800"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.231383 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2g9xk" event={"ID":"4387c45a-d8f9-4478-b224-b4e656880aaf","Type":"ContainerStarted","Data":"d8fe0f9937a0dcd9b2a618d3571f25ce0987d6ec01cc3a89b8848aa8253335c1"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.231772 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2g9xk"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.232174 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568060-6lptj" event={"ID":"64d9430a-2f41-4dac-bfa7-9fa47a85db9a","Type":"ContainerStarted","Data":"a83d27cb9248b45acb70811840e27868f76205f4e1db393772b15f93ff7e59dd"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.233259 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" event={"ID":"0b3289d7-beb6-4959-85b3-e2161abd915b","Type":"ContainerStarted","Data":"1d2992ac37dad9760d18fe5a9924be040c0d2f10d2a9b9efa5c14900b5067c29"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.235439 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" event={"ID":"28d77447-b859-4037-b20b-ab6ab1de8d5f","Type":"ContainerStarted","Data":"e177578d0b23978805ec52a0f766a1e21b34f3009f86c789ccce69b5e5e6c6b3"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.238254 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-2g9xk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.238314 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2g9xk" podUID="4387c45a-d8f9-4478-b224-b4e656880aaf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.238661 4932 patch_prober.go:28] interesting pod/console-operator-58897d9998-bpdhb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.238702 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" podUID="26b21258-5475-4f3e-bc9e-3f5c8e7fc83f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.247177 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" event={"ID":"3e2f4747-5e21-4512-8ac6-e7544c5c360f","Type":"ContainerStarted","Data":"ac2ad7f64daabf53a1cb95b9f9f7f13e2b2d20c0fe3e3c4cde1bd2d8fedf21a1"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.248310 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" event={"ID":"1c1a2ec6-38b9-4009-91dd-1242d286b0ff","Type":"ContainerStarted","Data":"df73f62210a75801e0f4868861f693d55c4ee1255fdd125310dc4267f960d149"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.251644 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" event={"ID":"a41fabd1-0161-4897-92cd-7398e84f1f04","Type":"ContainerStarted","Data":"38f9b091af40ce3dda02f53a8bd60782a6d78d2fcbb3a2a00f4045118ca2c631"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.252951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" event={"ID":"0be2c67e-01c1-4548-b94d-99d4c3b97130","Type":"ContainerStarted","Data":"340bb289f3ef8318a4d0251949990622c0c0e35ce4e858e7ecfeea2ff644c9b9"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.255400 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" event={"ID":"9c992d5e-0530-450e-a359-afe22329d324","Type":"ContainerStarted","Data":"d844b9031ef2c8f03fc9941fb27b3033fd8f5b7cbede4e62dee4a893c36bf905"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.260973 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" event={"ID":"2a62a49d-2dd6-4378-925e-f361279f446c","Type":"ContainerStarted","Data":"f278b0286302d2826dbcd82584df709002a2aae3639ff01ce37731528ba7d95f"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.266174 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" event={"ID":"2953b210-7e39-430a-824e-a1bd46ecff06","Type":"ContainerStarted","Data":"0ce454f25c17c05190d58d5b532bb53d561b570851ae750da5a3915eef0c5405"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.269928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z8gns" event={"ID":"b870d2fa-ac84-43c4-a552-73ab8a723e12","Type":"ContainerStarted","Data":"894e58151b53e1101965a7a038d57d431d0aaecbafe5d4498b8fdca73c0d0397"}
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.276268 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.276782 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.293427 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" podStartSLOduration=194.29339766 podStartE2EDuration="3m14.29339766s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:28.288044642 +0000 UTC m=+251.883242911" watchObservedRunningTime="2026-03-21 09:02:28.29339766 +0000 UTC m=+251.888595929"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.314176 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.320556 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.820507653 +0000 UTC m=+252.415706112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.331821 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-524qf"]
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.337071 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wpxg"]
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.397513 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" podStartSLOduration=194.397477665 podStartE2EDuration="3m14.397477665s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:28.383012319 +0000 UTC m=+251.978210608" watchObservedRunningTime="2026-03-21 09:02:28.397477665 +0000 UTC m=+251.992675934"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.398714 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4blxs"]
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.417183 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.417757 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:28.917737091 +0000 UTC m=+252.512935370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.520082 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.521025 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.02099786 +0000 UTC m=+252.616196129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.521150 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl6dt"]
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.628884 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.629749 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.129726811 +0000 UTC m=+252.724925080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.731597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.731840 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.231821292 +0000 UTC m=+252.827019561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.749552 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-v7lxk" podStartSLOduration=194.749529339 podStartE2EDuration="3m14.749529339s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:28.745261535 +0000 UTC m=+252.340459814" watchObservedRunningTime="2026-03-21 09:02:28.749529339 +0000 UTC m=+252.344727608"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.752033 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-945r8"]
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.789207 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2g9xk" podStartSLOduration=194.789179697 podStartE2EDuration="3m14.789179697s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:28.781056661 +0000 UTC m=+252.376254930" watchObservedRunningTime="2026-03-21 09:02:28.789179697 +0000 UTC m=+252.384377976"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.820576 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g4ft" podStartSLOduration=194.820551864 podStartE2EDuration="3m14.820551864s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:28.819942295 +0000 UTC m=+252.415140574" watchObservedRunningTime="2026-03-21 09:02:28.820551864 +0000 UTC m=+252.415750123"
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.833023 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.833304 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.333271544 +0000 UTC m=+252.928469823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.833444 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.833954 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.333945885 +0000 UTC m=+252.929144154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.842001 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bd9lw"]
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.945242 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.945507 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.445476914 +0000 UTC m=+253.040675193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.945986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:28 crc kubenswrapper[4932]: E0321 09:02:28.946335 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.44632189 +0000 UTC m=+253.041520159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:28 crc kubenswrapper[4932]: I0321 09:02:28.947290 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-764x4" podStartSLOduration=194.94727692 podStartE2EDuration="3m14.94727692s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:28.944991888 +0000 UTC m=+252.540190167" watchObservedRunningTime="2026-03-21 09:02:28.94727692 +0000 UTC m=+252.542475179"
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.049000 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.049889 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.549870038 +0000 UTC m=+253.145068307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.058866 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" podStartSLOduration=195.058836299 podStartE2EDuration="3m15.058836299s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.050790676 +0000 UTC m=+252.645988945" watchObservedRunningTime="2026-03-21 09:02:29.058836299 +0000 UTC m=+252.654034568"
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.104008 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7htc" podStartSLOduration=195.10398639 podStartE2EDuration="3m15.10398639s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.101523262 +0000 UTC m=+252.696721531" watchObservedRunningTime="2026-03-21 09:02:29.10398639 +0000 UTC m=+252.699184659"
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.151377 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.151810 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.651793044 +0000 UTC m=+253.246991313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.193458 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" podStartSLOduration=195.193412843 podStartE2EDuration="3m15.193412843s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.18948113 +0000 UTC m=+252.784679409" watchObservedRunningTime="2026-03-21 09:02:29.193412843 +0000 UTC m=+252.788611112"
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.252824 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.253254 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.753223575 +0000 UTC m=+253.348421844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.268675 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.269260 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.769247188 +0000 UTC m=+253.364445457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.353457 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" event={"ID":"cc4faed3-5d31-4c1d-bca1-140b12b1ec30","Type":"ContainerStarted","Data":"dc03c78e49aa81ac180d6d90db13d1e8b2c63b7b00b078137ba48e8ac5d95aad"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.360401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" event={"ID":"026bb1a2-7881-45a8-8845-53d8bbcb4166","Type":"ContainerStarted","Data":"489cae96f1c983e4c7d97827ecbf5c2e538304608026e5eb1f12aaa484037b4f"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.374970 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.377860 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.877829235 +0000 UTC m=+253.473027504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.453307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" event={"ID":"6c434bdc-13ee-4405-9cba-fc4f7c18c659","Type":"ContainerStarted","Data":"65bd3047b9fb95fdf2e3b6752f117c0948ef7d0491fef904e78b3c0056851dea"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.453379 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" event={"ID":"6c434bdc-13ee-4405-9cba-fc4f7c18c659","Type":"ContainerStarted","Data":"ae7c4696630cf397ae38c1465ead1def3907f8c572356e6fb149ad416f1df67d"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.483972 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.484447 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:29.984433518 +0000 UTC m=+253.579631777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.493316 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" event={"ID":"a41fabd1-0161-4897-92cd-7398e84f1f04","Type":"ContainerStarted","Data":"3b0af65d2236f723cb1916d002443e82da35800ac20718694968b5c5a02d89fb"}
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.589762 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.089732291 +0000 UTC m=+253.684930550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.593062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" event={"ID":"dad9bd53-9063-4f82-91e9-7c4563696223","Type":"ContainerStarted","Data":"a23e47631421bc2ebc68e1de530497d5d560e055f8df1cbf12f7bcb99482b179"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.595450 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.596061 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.598053 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.098023362 +0000 UTC m=+253.693221811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.640193 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" event={"ID":"1c1a2ec6-38b9-4009-91dd-1242d286b0ff","Type":"ContainerStarted","Data":"71a226bf44d4fedb252486ff68b6b74f08e7813d1e1cf91a21fd0c9a42af90c9"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.643153 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.674979 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" event={"ID":"eb4e7142-4148-4ebc-864d-7f7c6cfbf237","Type":"ContainerStarted","Data":"3e2c80929feeaba227c32bd462f489eac8696132b1fed640ebdabe6c07de2ffe"}
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.675814 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg"
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.676490 4932 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-th6sl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.676572 4932 prober.go:107] "Probe
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" podUID="1c1a2ec6-38b9-4009-91dd-1242d286b0ff" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.677754 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wpxg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.677797 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.683854 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" event={"ID":"0a1b4989-60a4-401b-aeec-b1fbb536aef1","Type":"ContainerStarted","Data":"2b6d4d469c48a766daa0cda85984d6194884ddf63151583628e1eb290f137206"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.696912 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.697267 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.197235352 +0000 UTC m=+253.792433621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.701147 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" event={"ID":"2fa9930b-f9e4-4466-b47c-13d7c3e9cdf4","Type":"ContainerStarted","Data":"5792f4bc7e8d40556f9649b9712c98d3cb73ce54a38bdc53b0aa8452c6365525"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.743168 4932 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jqqm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.743600 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" podUID="42488cf8-9ce9-4e6f-a6a7-e700926a8627" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.751064 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" event={"ID":"18878259-949d-4a7b-886e-cd390bd5f74e","Type":"ContainerStarted","Data":"cc8f65a527ee626e2ac7b1bf3d5037e9dc6ad9d09f8fda8abbdbe3df0699c751"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.751119 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" event={"ID":"42488cf8-9ce9-4e6f-a6a7-e700926a8627","Type":"ContainerStarted","Data":"2ade6329e16bb43fbb657998d26baed85d3092820dec430f888278a315d176a8"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.751149 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.756161 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl6dt" event={"ID":"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4","Type":"ContainerStarted","Data":"70cd64d9abaaf3e0a9a13b0fc46c9cae2938a2385f4069af9d831ff803e01d49"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.776656 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" event={"ID":"4092f902-ab2d-4d16-a90e-f0e28265ee00","Type":"ContainerStarted","Data":"75b720fd9fff014c822771dd6a4c985939181362892d17392d88ad5c609b3f23"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.799017 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.802751 4932 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.302737332 +0000 UTC m=+253.897935601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.824492 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" podStartSLOduration=195.824473375 podStartE2EDuration="3m15.824473375s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.823121862 +0000 UTC m=+253.418320131" watchObservedRunningTime="2026-03-21 09:02:29.824473375 +0000 UTC m=+253.419671644" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.824583 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8n8zb" event={"ID":"0c3468db-1929-4073-af49-e43a543ef0ae","Type":"ContainerStarted","Data":"be3dba3ee1881345c0b97919c284a6ec8acb5966d58a66bf9e75bad9d4e4b875"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.851029 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.851926 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.882042 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-94dz9" podStartSLOduration=195.882009985 podStartE2EDuration="3m15.882009985s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.881722447 +0000 UTC m=+253.476920716" watchObservedRunningTime="2026-03-21 09:02:29.882009985 +0000 UTC m=+253.477208264" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.910141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:29 crc kubenswrapper[4932]: E0321 09:02:29.911257 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.411240555 +0000 UTC m=+254.006438824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.944950 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nnhb7" podStartSLOduration=195.944911114 podStartE2EDuration="3m15.944911114s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.939496844 +0000 UTC m=+253.534695133" watchObservedRunningTime="2026-03-21 09:02:29.944911114 +0000 UTC m=+253.540109383" Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.955991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" event={"ID":"ff789aae-664d-4298-8122-f1d0751118b8","Type":"ContainerStarted","Data":"721eb2af07b7be86f5ca24a9cab7490b8ca320c431cedb9b1ffc65cb467845a3"} Mar 21 09:02:29 crc kubenswrapper[4932]: I0321 09:02:29.972625 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl" podStartSLOduration=195.972585725 podStartE2EDuration="3m15.972585725s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:29.971904223 +0000 UTC m=+253.567102512" watchObservedRunningTime="2026-03-21 
09:02:29.972585725 +0000 UTC m=+253.567783994" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.014263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.014654 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.514642528 +0000 UTC m=+254.109840797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.015657 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" podStartSLOduration=196.01564466 podStartE2EDuration="3m16.01564466s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.014996809 +0000 UTC m=+253.610195098" watchObservedRunningTime="2026-03-21 09:02:30.01564466 +0000 UTC m=+253.610842929" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.035610 4932 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z8gns" event={"ID":"b870d2fa-ac84-43c4-a552-73ab8a723e12","Type":"ContainerStarted","Data":"0e66e616421af49f2f026ec5131364df04f90e59a0cffc330fd5f9f3b3b80049"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.064252 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-524qf" event={"ID":"341989e4-9f64-45f9-85f7-55bb89ca0447","Type":"ContainerStarted","Data":"5c6d13602f1ddb3c05f3980f7286fb728861447577d154e4a01d6ff40a985bca"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.085340 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" event={"ID":"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7","Type":"ContainerStarted","Data":"bde822d31bca54040fed08c0f72b83f4a37de71d3cffb67170bf318069671a9f"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.104843 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4" podStartSLOduration=196.104808135 podStartE2EDuration="3m16.104808135s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.086718255 +0000 UTC m=+253.681916524" watchObservedRunningTime="2026-03-21 09:02:30.104808135 +0000 UTC m=+253.700006404" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.112649 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" event={"ID":"84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f","Type":"ContainerStarted","Data":"1752fcc36d6529af807216cb4755b5e68fb6aaa66c1470b765c29cd6305db664"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.114301 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.114787 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.116118 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.61610323 +0000 UTC m=+254.211301499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.118681 4932 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jhx9z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.118742 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" podUID="84ac4bf8-d0b3-414e-9a9d-92e1a7874f4f" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.118912 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" event={"ID":"e61399d9-1d7d-4109-b634-3bbcdad81d2a","Type":"ContainerStarted","Data":"a8f4ffd0e0f7e8865536ca85f6ef67f1a60e3dafa0cc5cb809c0567cab2e6015"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.121022 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" event={"ID":"a1dbcfef-3656-4dac-82a6-78cc48d655df","Type":"ContainerStarted","Data":"1a67ec2d33f39e3e69bf3d01c9fca72419f9ac014d7bf9ac6411120052cc68c6"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.165169 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c992d5e-0530-450e-a359-afe22329d324" containerID="d844b9031ef2c8f03fc9941fb27b3033fd8f5b7cbede4e62dee4a893c36bf905" exitCode=0 Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.165307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" event={"ID":"9c992d5e-0530-450e-a359-afe22329d324","Type":"ContainerDied","Data":"d844b9031ef2c8f03fc9941fb27b3033fd8f5b7cbede4e62dee4a893c36bf905"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.166765 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.169504 4932 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xlbr5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]log ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]etcd ok Mar 21 09:02:30 crc kubenswrapper[4932]: 
[+]poststarthook/start-apiserver-admission-initializer ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/generic-apiserver-start-informers ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/max-in-flight-filter ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 21 09:02:30 crc kubenswrapper[4932]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 21 09:02:30 crc kubenswrapper[4932]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/project.openshift.io-projectcache ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/openshift.io-startinformers ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 21 09:02:30 crc kubenswrapper[4932]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 21 09:02:30 crc kubenswrapper[4932]: livez check failed Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.169570 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" podUID="28d77447-b859-4037-b20b-ab6ab1de8d5f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.187997 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-945r8" event={"ID":"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b","Type":"ContainerStarted","Data":"2fd36e5a7c78a5229f94fd80ae5644897d27d6b3b7690e53610de0c6cda5dcb2"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.209730 4932 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-machine-config-operator/machine-config-server-8n8zb" podStartSLOduration=7.209709564 podStartE2EDuration="7.209709564s" podCreationTimestamp="2026-03-21 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.208959031 +0000 UTC m=+253.804157300" watchObservedRunningTime="2026-03-21 09:02:30.209709564 +0000 UTC m=+253.804907833" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.214072 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" event={"ID":"8309fb7a-e364-4f9b-a723-a0d4926a0a51","Type":"ContainerStarted","Data":"04e9d788f6208c092d22d09404f8978b9ab47af9949bff9d41b3df5e7c1c2598"} Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.216117 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.218313 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.718295904 +0000 UTC m=+254.313494173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.226967 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q6jcs" podStartSLOduration=196.212337527 podStartE2EDuration="3m16.212337527s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.139887848 +0000 UTC m=+253.735086117" watchObservedRunningTime="2026-03-21 09:02:30.212337527 +0000 UTC m=+253.807535796" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.228553 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.228614 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.262311 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress/router-default-5444994796-z8gns" podStartSLOduration=196.262288359 podStartE2EDuration="3m16.262288359s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.261780983 +0000 UTC m=+253.856979252" watchObservedRunningTime="2026-03-21 09:02:30.262288359 +0000 UTC m=+253.857486638" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.271056 4932 patch_prober.go:28] interesting pod/console-operator-58897d9998-bpdhb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.271112 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" podUID="26b21258-5475-4f3e-bc9e-3f5c8e7fc83f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.272493 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-2g9xk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.272520 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2g9xk" podUID="4387c45a-d8f9-4478-b224-b4e656880aaf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.296756 4932 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-524qf" podStartSLOduration=196.296734792 podStartE2EDuration="3m16.296734792s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.295212494 +0000 UTC m=+253.890410763" watchObservedRunningTime="2026-03-21 09:02:30.296734792 +0000 UTC m=+253.891933061" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.317135 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.323100 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.823080581 +0000 UTC m=+254.418278860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.330892 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z" podStartSLOduration=196.330874206 podStartE2EDuration="3m16.330874206s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.328549803 +0000 UTC m=+253.923748072" watchObservedRunningTime="2026-03-21 09:02:30.330874206 +0000 UTC m=+253.926072475" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.378535 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" podStartSLOduration=196.378508535 podStartE2EDuration="3m16.378508535s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.370289526 +0000 UTC m=+253.965487795" watchObservedRunningTime="2026-03-21 09:02:30.378508535 +0000 UTC m=+253.973706804" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.399745 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" podStartSLOduration=150.399722063 podStartE2EDuration="2m30.399722063s" podCreationTimestamp="2026-03-21 09:00:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.398610927 +0000 UTC m=+253.993809186" watchObservedRunningTime="2026-03-21 09:02:30.399722063 +0000 UTC m=+253.994920332" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.421822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.422263 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:30.92224621 +0000 UTC m=+254.517444479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.423561 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53000: no serving certificate available for the kubelet" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.425297 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rqlxn" podStartSLOduration=196.425274935 podStartE2EDuration="3m16.425274935s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.423968345 +0000 UTC m=+254.019166634" watchObservedRunningTime="2026-03-21 09:02:30.425274935 +0000 UTC m=+254.020473204" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.484754 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.525923 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.527552 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.027535943 +0000 UTC m=+254.622734212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.554609 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53008: no serving certificate available for the kubelet" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.601280 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" podStartSLOduration=196.601255902 podStartE2EDuration="3m16.601255902s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:30.460124922 +0000 UTC m=+254.055323201" watchObservedRunningTime="2026-03-21 09:02:30.601255902 +0000 UTC m=+254.196454171" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.633001 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:30 crc 
kubenswrapper[4932]: E0321 09:02:30.633735 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.133721153 +0000 UTC m=+254.728919422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.700734 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53018: no serving certificate available for the kubelet" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.733995 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.734673 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.234611837 +0000 UTC m=+254.829810106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.822091 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53028: no serving certificate available for the kubelet" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.838423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.839102 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.339081924 +0000 UTC m=+254.934280193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.903430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.903507 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.912595 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53044: no serving certificate available for the kubelet" Mar 21 09:02:30 crc kubenswrapper[4932]: I0321 09:02:30.940696 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:30 crc kubenswrapper[4932]: E0321 09:02:30.941162 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.441144574 +0000 UTC m=+255.036342843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.015863 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53048: no serving certificate available for the kubelet" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.027511 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.043652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.044072 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.544060672 +0000 UTC m=+255.139258931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.044532 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:31 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:31 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:31 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.044561 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.139031 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53062: no serving certificate available for the kubelet" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.145052 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.145284 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.645252285 +0000 UTC m=+255.240450564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.145555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.146089 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.646079041 +0000 UTC m=+255.241277310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.247427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.247666 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.747633847 +0000 UTC m=+255.342832116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.247813 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.248183 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.748175703 +0000 UTC m=+255.343373972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.292985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" event={"ID":"8309fb7a-e364-4f9b-a723-a0d4926a0a51","Type":"ContainerStarted","Data":"024919ca6cc75d54fd07cac05a2e2869d6d1dd7301ca35177f4e73ea829a16c2"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.305011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-524qf" event={"ID":"341989e4-9f64-45f9-85f7-55bb89ca0447","Type":"ContainerStarted","Data":"0ff0890daff2fa7693000a8ec362b69617a74b2008e26cebeccabc34b65e4436"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.321399 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53076: no serving certificate available for the kubelet" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.343431 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" event={"ID":"ff789aae-664d-4298-8122-f1d0751118b8","Type":"ContainerStarted","Data":"777727125f3c772f141b11245c8297a9d6bf6e55804a2d65573420a17e87eccc"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.344025 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" 
event={"ID":"ff789aae-664d-4298-8122-f1d0751118b8","Type":"ContainerStarted","Data":"e280dfaf00990d461c0e291d42efa919e1c88017facdc93af74fedd986a43907"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.349078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.350487 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.850470581 +0000 UTC m=+255.445668850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.360998 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" event={"ID":"da6e3e5c-4596-49b1-b2ce-15a1dca234b7","Type":"ContainerStarted","Data":"5feec42d1a94901f45e226c3282360c4f1b85484d0cae1d642d48b3d174c068f"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.378689 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r2clz" podStartSLOduration=197.378665188 
podStartE2EDuration="3m17.378665188s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.377942005 +0000 UTC m=+254.973140274" watchObservedRunningTime="2026-03-21 09:02:31.378665188 +0000 UTC m=+254.973863447" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.387647 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lbpfm" event={"ID":"1e37527c-54d4-43e2-be4d-7fb7e98b6019","Type":"ContainerStarted","Data":"eaeb707871cc3105d61c50205d9274b9d4a7190f47306307faa2cb450018913c"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.400189 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" event={"ID":"6c434bdc-13ee-4405-9cba-fc4f7c18c659","Type":"ContainerStarted","Data":"b52d05af927d2003669bb4933cc8185daf002a79d21d0c1cf398cef7c14247e3"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.415437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" event={"ID":"0a1b4989-60a4-401b-aeec-b1fbb536aef1","Type":"ContainerStarted","Data":"fee2cb20c13d5efce4974c7c1d26b40e881b81fbc10680215f6dcf57db1917a9"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.435685 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" event={"ID":"400b3f5b-c001-4186-b86b-006d3bda3396","Type":"ContainerStarted","Data":"8f3f4e7aabdb1b440c4627463d2b0663c0e594fbfd7d83d0ee1a9db690dfde93"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.440480 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" 
event={"ID":"18878259-949d-4a7b-886e-cd390bd5f74e","Type":"ContainerStarted","Data":"792951c2301cf4a6f9902da657a00e2a6aca53b57adfe684ce6422c76f252dc4"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.449004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl6dt" event={"ID":"e980bce8-8b4c-4e96-b7a8-7b5465cfeae4","Type":"ContainerStarted","Data":"2ad671b8c8e393778fdca329ccd0a8083493d15b693a0c42626861d15b3aa569"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.452470 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.454869 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:31.954850085 +0000 UTC m=+255.550048354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.483426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" event={"ID":"3e2f4747-5e21-4512-8ac6-e7544c5c360f","Type":"ContainerStarted","Data":"6dc50f5c6f468f4ddc7b8f003ce970f3ba72247e5bd787003ddeec806d816d89"} Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.490533 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4t2nv" podStartSLOduration=197.490506277 podStartE2EDuration="3m17.490506277s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.4861888 +0000 UTC m=+255.081387069" watchObservedRunningTime="2026-03-21 09:02:31.490506277 +0000 UTC m=+255.085704546" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.496214 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm" Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.500385 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" event={"ID":"0b3289d7-beb6-4959-85b3-e2161abd915b","Type":"ContainerStarted","Data":"3081057ef3ff6ee7c5b651bd28fce664322cfec93ab77a03e630fc8779a8f92e"} Mar 21 09:02:31 crc 
kubenswrapper[4932]: I0321 09:02:31.500444 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" event={"ID":"0b3289d7-beb6-4959-85b3-e2161abd915b","Type":"ContainerStarted","Data":"ef900b31d081eb47bed4c667f7be3f63d9e384259e5cb5b98a6012b2e8a1ecf2"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.501173 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.513009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" event={"ID":"2bd26007-3124-4ad1-b3e9-9f875f8e7bc7","Type":"ContainerStarted","Data":"927584847eecb8f6332d5362a30d8fa6a887011287afc53abee9691a4cc757a0"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.547281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" event={"ID":"0be2c67e-01c1-4548-b94d-99d4c3b97130","Type":"ContainerStarted","Data":"83943bdd98bcabc5a43f2d85d9e1f85704a6b3f98ed7df628181ea84b5297a53"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.554129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.556084 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.056037489 +0000 UTC m=+255.651235758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.568772 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" event={"ID":"a1dbcfef-3656-4dac-82a6-78cc48d655df","Type":"ContainerStarted","Data":"519d220e44ad0da68d36e265f1a9e01040eee944095af69b2f62938d5e833d78"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.570414 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.571957 4932 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wt26g container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body=
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.572000 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" podUID="a1dbcfef-3656-4dac-82a6-78cc48d655df" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.593951 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4blxs" podStartSLOduration=197.5939252 podStartE2EDuration="3m17.5939252s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.533801848 +0000 UTC m=+255.129000117" watchObservedRunningTime="2026-03-21 09:02:31.5939252 +0000 UTC m=+255.189123469"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.594157 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mtbkl" podStartSLOduration=197.594152617 podStartE2EDuration="3m17.594152617s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.592911138 +0000 UTC m=+255.188109427" watchObservedRunningTime="2026-03-21 09:02:31.594152617 +0000 UTC m=+255.189350876"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.604250 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" event={"ID":"9c992d5e-0530-450e-a359-afe22329d324","Type":"ContainerStarted","Data":"2cda214a0ceefa58770e684dbe638a0a68c2223624a1add4fc1110beadbf4761"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.624542 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zwq9w" podStartSLOduration=197.624521353 podStartE2EDuration="3m17.624521353s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.624264024 +0000 UTC m=+255.219462293" watchObservedRunningTime="2026-03-21 09:02:31.624521353 +0000 UTC m=+255.219719622"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.626067 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-945r8" event={"ID":"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b","Type":"ContainerStarted","Data":"5166999c973e7f4cbe0bb407587b8cb68b12c68e63c443df7000b90f319a89ee"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.646083 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" event={"ID":"2a62a49d-2dd6-4378-925e-f361279f446c","Type":"ContainerStarted","Data":"b923a15765bac0b8734425779767bc4995b81c55f1f961ef951126c02c3cc0ab"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.648785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" event={"ID":"eb4e7142-4148-4ebc-864d-7f7c6cfbf237","Type":"ContainerStarted","Data":"7d903091095909d0e0f8ac4008c8097f42862b4198451d6f17751f0932e04682"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.650121 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wpxg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.650210 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.665246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.666458 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.166436221 +0000 UTC m=+255.761634490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.668695 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kl6dt" podStartSLOduration=8.668666111 podStartE2EDuration="8.668666111s" podCreationTimestamp="2026-03-21 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.665331146 +0000 UTC m=+255.260529415" watchObservedRunningTime="2026-03-21 09:02:31.668666111 +0000 UTC m=+255.263864380"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.747693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" event={"ID":"cc4faed3-5d31-4c1d-bca1-140b12b1ec30","Type":"ContainerStarted","Data":"aa505010a56b41bd73da7917e11288f1be747c23fa06bb7e26516851c90db6fa"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.748238 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" event={"ID":"cc4faed3-5d31-4c1d-bca1-140b12b1ec30","Type":"ContainerStarted","Data":"8b478a377a4114340fa4c8f7ed69a532fc2fe938e61c5ed1df6d79ee32545492"}
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.748294 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ph6bm"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.749790 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jhx9z"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.753605 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wxwjq" podStartSLOduration=197.753580623 podStartE2EDuration="3m17.753580623s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.699752119 +0000 UTC m=+255.294950378" watchObservedRunningTime="2026-03-21 09:02:31.753580623 +0000 UTC m=+255.348778892"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.755464 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th6sl"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.767565 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.769329 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.269304557 +0000 UTC m=+255.864502826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.773512 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4h8rg" podStartSLOduration=197.773485489 podStartE2EDuration="3m17.773485489s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.738770346 +0000 UTC m=+255.333968615" watchObservedRunningTime="2026-03-21 09:02:31.773485489 +0000 UTC m=+255.368683758"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.814769 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" podStartSLOduration=197.814750797 podStartE2EDuration="3m17.814750797s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.813880549 +0000 UTC m=+255.409078838" watchObservedRunningTime="2026-03-21 09:02:31.814750797 +0000 UTC m=+255.409949066"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.871469 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.871938 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.371918526 +0000 UTC m=+255.967116795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.879463 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rsfv" podStartSLOduration=197.879418111 podStartE2EDuration="3m17.879418111s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.878505142 +0000 UTC m=+255.473703431" watchObservedRunningTime="2026-03-21 09:02:31.879418111 +0000 UTC m=+255.474616370"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.945586 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qlsc5" podStartSLOduration=197.945554561 podStartE2EDuration="3m17.945554561s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.932367957 +0000 UTC m=+255.527566226" watchObservedRunningTime="2026-03-21 09:02:31.945554561 +0000 UTC m=+255.540752830"
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.985800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:31 crc kubenswrapper[4932]: E0321 09:02:31.986252 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.486237852 +0000 UTC m=+256.081436121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:31 crc kubenswrapper[4932]: I0321 09:02:31.995518 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" podStartSLOduration=197.995489813 podStartE2EDuration="3m17.995489813s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:31.976100942 +0000 UTC m=+255.571299211" watchObservedRunningTime="2026-03-21 09:02:31.995489813 +0000 UTC m=+255.590688082"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.029860 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53084: no serving certificate available for the kubelet"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.034512 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 21 09:02:32 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Mar 21 09:02:32 crc kubenswrapper[4932]: [+]process-running ok
Mar 21 09:02:32 crc kubenswrapper[4932]: healthz check failed
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.034567 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.091144 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.091606 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.591594756 +0000 UTC m=+256.186793025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.106373 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmpnv" podStartSLOduration=198.10633881 podStartE2EDuration="3m18.10633881s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:32.06470241 +0000 UTC m=+255.659900679" watchObservedRunningTime="2026-03-21 09:02:32.10633881 +0000 UTC m=+255.701537069"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.191957 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.192460 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.692441068 +0000 UTC m=+256.287639327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.200913 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jqqm4"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.210234 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-szhmn" podStartSLOduration=198.210214118 podStartE2EDuration="3m18.210214118s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:32.106946729 +0000 UTC m=+255.702144998" watchObservedRunningTime="2026-03-21 09:02:32.210214118 +0000 UTC m=+255.805412387"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.296742 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.297660 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.797644339 +0000 UTC m=+256.392842608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.399995 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.400237 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.900198134 +0000 UTC m=+256.495396403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.400319 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.400939 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:32.900931248 +0000 UTC m=+256.496129517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.502108 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.502528 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.002511562 +0000 UTC m=+256.597709831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.605178 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.605792 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.105767262 +0000 UTC m=+256.700965531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.706375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.706593 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.206554642 +0000 UTC m=+256.801752911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.706745 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.707065 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.207052327 +0000 UTC m=+256.802250596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.738876 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" event={"ID":"e61399d9-1d7d-4109-b634-3bbcdad81d2a","Type":"ContainerStarted","Data":"8780d7e55d4ffe1fd6ba26743dbf627e57a426afbbe7ed20210b7835687a4f0f"}
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.749165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-945r8" event={"ID":"3c8ae130-bd13-49ec-a21c-bb7a4c022a8b","Type":"ContainerStarted","Data":"1466ca1b6d1ddbf531fc4597dc0630bd699bbcf2c37ebe8fadb564de7d263335"}
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.751085 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wpxg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.751124 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.793610 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gx64c"]
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.794660 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gx64c"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.802037 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.802664 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-945r8" podStartSLOduration=9.802639844 podStartE2EDuration="9.802639844s" podCreationTimestamp="2026-03-21 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:32.801589262 +0000 UTC m=+256.396787541" watchObservedRunningTime="2026-03-21 09:02:32.802639844 +0000 UTC m=+256.397838113"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.808005 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.808389 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.308368625 +0000 UTC m=+256.903566884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.839094 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gx64c"]
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.909238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.909551 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-catalog-content\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c"
Mar 21 09:02:32 crc kubenswrapper[4932]: E0321 09:02:32.909853 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.409827176 +0000 UTC m=+257.005025445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.910052 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hhw\" (UniqueName: \"kubernetes.io/projected/4b4f0982-25bd-4a81-b00f-7b35377a893a-kube-api-access-x8hhw\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.910504 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-utilities\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.964699 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jbmnj"]
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.965773 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbmnj"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.968227 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 21 09:02:32 crc kubenswrapper[4932]: I0321 09:02:32.998394 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbmnj"]
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.012021 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.014255 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.51420895 +0000 UTC m=+257.109407219 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.016682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hhw\" (UniqueName: \"kubernetes.io/projected/4b4f0982-25bd-4a81-b00f-7b35377a893a-kube-api-access-x8hhw\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.016759 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-utilities\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.016833 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.016890 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-catalog-content\") pod \"certified-operators-gx64c\" (UID: 
\"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.017471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-catalog-content\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.020335 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-utilities\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.020489 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.520461396 +0000 UTC m=+257.115659665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.033688 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8ffkm"] Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.034705 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" podUID="d869e800-3cce-4e0d-b822-158858ee632b" containerName="controller-manager" containerID="cri-o://4d534460872774905ff8ca893964eab4b655f60474cb295b884808dc476c3cfe" gracePeriod=30 Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.044702 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.058571 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:33 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:33 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:33 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.058720 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.075182 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hhw\" (UniqueName: \"kubernetes.io/projected/4b4f0982-25bd-4a81-b00f-7b35377a893a-kube-api-access-x8hhw\") pod \"certified-operators-gx64c\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") " pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.117284 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"] Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.117699 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" podUID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" containerName="route-controller-manager" containerID="cri-o://8209e9cddd8cc4231930316e06f635c24d8af46405f07407f81968eb0deeea93" gracePeriod=30 Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.118426 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.118819 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.119977 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 09:02:33.619949567 +0000 UTC m=+257.215147836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.120042 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.120081 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz6dt\" (UniqueName: \"kubernetes.io/projected/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-kube-api-access-fz6dt\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.120138 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-catalog-content\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.120188 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-utilities\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.120557 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.620549986 +0000 UTC m=+257.215748245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.222375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.222645 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz6dt\" (UniqueName: \"kubernetes.io/projected/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-kube-api-access-fz6dt\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.222713 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-catalog-content\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.222755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-utilities\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.223308 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-utilities\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.223406 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.723388821 +0000 UTC m=+257.318587090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.223760 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-945r8" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.223984 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-catalog-content\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.254591 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz6dt\" (UniqueName: \"kubernetes.io/projected/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-kube-api-access-fz6dt\") pod \"community-operators-jbmnj\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") " pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.259753 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n7ztm"] Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.260827 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.295550 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7ztm"] Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.299661 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.324181 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-utilities\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.324221 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-catalog-content\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.324325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lqj\" (UniqueName: \"kubernetes.io/projected/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-kube-api-access-v2lqj\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.324365 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.324728 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.824713258 +0000 UTC m=+257.419911527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.373853 4932 ???:1] "http: TLS handshake error from 192.168.126.11:53086: no serving certificate available for the kubelet" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.389116 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6ctps"] Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.390237 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6ctps" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.403879 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ctps"] Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.426494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.426804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2lqj\" (UniqueName: \"kubernetes.io/projected/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-kube-api-access-v2lqj\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.426849 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-utilities\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.426872 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-catalog-content\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.427318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-catalog-content\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.428295 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-utilities\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.428824 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:33.928796443 +0000 UTC m=+257.523994712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.482830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2lqj\" (UniqueName: \"kubernetes.io/projected/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-kube-api-access-v2lqj\") pod \"certified-operators-n7ztm\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.527987 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-catalog-content\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.528085 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4lk\" (UniqueName: \"kubernetes.io/projected/2b21532b-ba09-4bcb-b030-eaad36e4ba20-kube-api-access-8n4lk\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.528135 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-utilities\") pod \"community-operators-6ctps\" (UID: 
\"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.528189 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.528550 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.02853639 +0000 UTC m=+257.623734659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.625506 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.628627 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.628850 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-utilities\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps" Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.628912 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.128878027 +0000 UTC m=+257.724076296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.628969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.629156 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-catalog-content\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.629307 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4lk\" (UniqueName: \"kubernetes.io/projected/2b21532b-ba09-4bcb-b030-eaad36e4ba20-kube-api-access-8n4lk\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.629308 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-utilities\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.629566 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-catalog-content\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.629589 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.129573729 +0000 UTC m=+257.724771998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.651189 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4lk\" (UniqueName: \"kubernetes.io/projected/2b21532b-ba09-4bcb-b030-eaad36e4ba20-kube-api-access-8n4lk\") pod \"community-operators-6ctps\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") " pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.719919 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.731187 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.731998 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.23198115 +0000 UTC m=+257.827179419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.784473 4932 generic.go:334] "Generic (PLEG): container finished" podID="d869e800-3cce-4e0d-b822-158858ee632b" containerID="4d534460872774905ff8ca893964eab4b655f60474cb295b884808dc476c3cfe" exitCode=0
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.784554 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" event={"ID":"d869e800-3cce-4e0d-b822-158858ee632b","Type":"ContainerDied","Data":"4d534460872774905ff8ca893964eab4b655f60474cb295b884808dc476c3cfe"}
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.787739 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" containerID="8209e9cddd8cc4231930316e06f635c24d8af46405f07407f81968eb0deeea93" exitCode=0
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.788605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" event={"ID":"7a022e41-0a7d-4ab3-bcce-c404160d00c9","Type":"ContainerDied","Data":"8209e9cddd8cc4231930316e06f635c24d8af46405f07407f81968eb0deeea93"}
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.847620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.848141 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.348128614 +0000 UTC m=+257.943326883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.948857 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.948990 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.448970177 +0000 UTC m=+258.044168446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.949036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:33 crc kubenswrapper[4932]: E0321 09:02:33.949464 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.449455182 +0000 UTC m=+258.044653451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:33 crc kubenswrapper[4932]: I0321 09:02:33.955887 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.042512 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 21 09:02:34 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Mar 21 09:02:34 crc kubenswrapper[4932]: [+]process-running ok
Mar 21 09:02:34 crc kubenswrapper[4932]: healthz check failed
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.042578 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.049513 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.049554 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dh7j\" (UniqueName: \"kubernetes.io/projected/7a022e41-0a7d-4ab3-bcce-c404160d00c9-kube-api-access-9dh7j\") pod \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.049585 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a022e41-0a7d-4ab3-bcce-c404160d00c9-serving-cert\") pod \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.049617 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-client-ca\") pod \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.049655 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-config\") pod \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\" (UID: \"7a022e41-0a7d-4ab3-bcce-c404160d00c9\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.050282 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.550244073 +0000 UTC m=+258.145442352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.050958 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a022e41-0a7d-4ab3-bcce-c404160d00c9" (UID: "7a022e41-0a7d-4ab3-bcce-c404160d00c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.051247 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-config" (OuterVolumeSpecName: "config") pod "7a022e41-0a7d-4ab3-bcce-c404160d00c9" (UID: "7a022e41-0a7d-4ab3-bcce-c404160d00c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.072843 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a022e41-0a7d-4ab3-bcce-c404160d00c9-kube-api-access-9dh7j" (OuterVolumeSpecName: "kube-api-access-9dh7j") pod "7a022e41-0a7d-4ab3-bcce-c404160d00c9" (UID: "7a022e41-0a7d-4ab3-bcce-c404160d00c9"). InnerVolumeSpecName "kube-api-access-9dh7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.073593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a022e41-0a7d-4ab3-bcce-c404160d00c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a022e41-0a7d-4ab3-bcce-c404160d00c9" (UID: "7a022e41-0a7d-4ab3-bcce-c404160d00c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.136399 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.169710 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gx64c"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.174059 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.174224 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dh7j\" (UniqueName: \"kubernetes.io/projected/7a022e41-0a7d-4ab3-bcce-c404160d00c9-kube-api-access-9dh7j\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.174238 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a022e41-0a7d-4ab3-bcce-c404160d00c9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.174247 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.174261 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a022e41-0a7d-4ab3-bcce-c404160d00c9-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.174583 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.674568003 +0000 UTC m=+258.269766272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: W0321 09:02:34.214864 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b4f0982_25bd_4a81_b00f_7b35377a893a.slice/crio-f1f35476a21abac78b8fc8131f04ba21909aa8a27296dceebb9599d225d6e4ca WatchSource:0}: Error finding container f1f35476a21abac78b8fc8131f04ba21909aa8a27296dceebb9599d225d6e4ca: Status 404 returned error can't find the container with id f1f35476a21abac78b8fc8131f04ba21909aa8a27296dceebb9599d225d6e4ca
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.276849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-config\") pod \"d869e800-3cce-4e0d-b822-158858ee632b\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.276919 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-client-ca\") pod \"d869e800-3cce-4e0d-b822-158858ee632b\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.276961 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d869e800-3cce-4e0d-b822-158858ee632b-serving-cert\") pod \"d869e800-3cce-4e0d-b822-158858ee632b\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.276991 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bk5j\" (UniqueName: \"kubernetes.io/projected/d869e800-3cce-4e0d-b822-158858ee632b-kube-api-access-9bk5j\") pod \"d869e800-3cce-4e0d-b822-158858ee632b\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.277140 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.277187 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-proxy-ca-bundles\") pod \"d869e800-3cce-4e0d-b822-158858ee632b\" (UID: \"d869e800-3cce-4e0d-b822-158858ee632b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.279292 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d869e800-3cce-4e0d-b822-158858ee632b" (UID: "d869e800-3cce-4e0d-b822-158858ee632b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.280638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-config" (OuterVolumeSpecName: "config") pod "d869e800-3cce-4e0d-b822-158858ee632b" (UID: "d869e800-3cce-4e0d-b822-158858ee632b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.281167 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d869e800-3cce-4e0d-b822-158858ee632b" (UID: "d869e800-3cce-4e0d-b822-158858ee632b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.281276 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.78125674 +0000 UTC m=+258.376455009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.283798 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869e800-3cce-4e0d-b822-158858ee632b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d869e800-3cce-4e0d-b822-158858ee632b" (UID: "d869e800-3cce-4e0d-b822-158858ee632b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.285753 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869e800-3cce-4e0d-b822-158858ee632b-kube-api-access-9bk5j" (OuterVolumeSpecName: "kube-api-access-9bk5j") pod "d869e800-3cce-4e0d-b822-158858ee632b" (UID: "d869e800-3cce-4e0d-b822-158858ee632b"). InnerVolumeSpecName "kube-api-access-9bk5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.291089 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jbmnj"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.379163 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.379280 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.379290 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-client-ca\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.379300 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d869e800-3cce-4e0d-b822-158858ee632b-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.379310 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bk5j\" (UniqueName: \"kubernetes.io/projected/d869e800-3cce-4e0d-b822-158858ee632b-kube-api-access-9bk5j\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.379319 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d869e800-3cce-4e0d-b822-158858ee632b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.379651 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.879638545 +0000 UTC m=+258.474836814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.415265 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ctps"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.480589 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.481271 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:34.981254462 +0000 UTC m=+258.576452731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.566447 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7ztm"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.583438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.583907 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.08388682 +0000 UTC m=+258.679085089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.590415 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"]
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.590923 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" containerName="route-controller-manager"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.590943 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" containerName="route-controller-manager"
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.590981 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869e800-3cce-4e0d-b822-158858ee632b" containerName="controller-manager"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.590991 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869e800-3cce-4e0d-b822-158858ee632b" containerName="controller-manager"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.591148 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" containerName="route-controller-manager"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.591193 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d869e800-3cce-4e0d-b822-158858ee632b" containerName="controller-manager"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.591880 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.611658 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fd9d5c85f-slx26"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.628284 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.628337 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fd9d5c85f-slx26"]
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.628480 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.686269 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.686780 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.186741045 +0000 UTC m=+258.781939314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.687412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-client-ca\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.687541 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-config\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.687633 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-proxy-ca-bundles\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.687744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58q9v\" (UniqueName: \"kubernetes.io/projected/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-kube-api-access-58q9v\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.687947 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnc6g\" (UniqueName: \"kubernetes.io/projected/23812f0f-b691-4b96-b82f-52e533a5b5fd-kube-api-access-cnc6g\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.688041 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-serving-cert\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.688127 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23812f0f-b691-4b96-b82f-52e533a5b5fd-serving-cert\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.688201 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-client-ca\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.688298 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.688449 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-config\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.689413 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.189392279 +0000 UTC m=+258.784590548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.737219 4932 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.792455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.792837 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnc6g\" (UniqueName: \"kubernetes.io/projected/23812f0f-b691-4b96-b82f-52e533a5b5fd-kube-api-access-cnc6g\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.792870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-serving-cert\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:02:34 crc kubenswrapper[4932]:
I0321 09:02:34.792907 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23812f0f-b691-4b96-b82f-52e533a5b5fd-serving-cert\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.793152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-client-ca\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.793222 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-config\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.793246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-client-ca\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.793282 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 09:02:35.293260836 +0000 UTC m=+258.888459105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.793325 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-config\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.793401 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-proxy-ca-bundles\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.793490 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58q9v\" (UniqueName: \"kubernetes.io/projected/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-kube-api-access-58q9v\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.794787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-client-ca\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.794912 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-config\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.795012 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-client-ca\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.795623 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-config\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.798597 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-proxy-ca-bundles\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.802479 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-serving-cert\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.807310 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23812f0f-b691-4b96-b82f-52e533a5b5fd-serving-cert\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.818081 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58q9v\" (UniqueName: \"kubernetes.io/projected/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-kube-api-access-58q9v\") pod \"controller-manager-fd9d5c85f-slx26\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") " pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.822098 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnc6g\" (UniqueName: \"kubernetes.io/projected/23812f0f-b691-4b96-b82f-52e533a5b5fd-kube-api-access-cnc6g\") pod \"route-controller-manager-5d4784c968-gfj7q\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") " pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.835337 4932 generic.go:334] "Generic (PLEG): container finished" podID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerID="70cd0ecf33ed5758a71a6bd6c73c8db98391de066a2c5fb17780c0f460cffa68" exitCode=0 Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.835438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jbmnj" event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerDied","Data":"70cd0ecf33ed5758a71a6bd6c73c8db98391de066a2c5fb17780c0f460cffa68"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.835481 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbmnj" event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerStarted","Data":"b8e2dc1415db45e0c52fe0ae581c8c341df26eea316e3fa1d5e98ddd8627c0fb"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.840878 4932 generic.go:334] "Generic (PLEG): container finished" podID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerID="f9c6a60a9771d8f525b1289ebd3e0394ee86ce9389351f99465d03b3e6675b56" exitCode=0 Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.841835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerDied","Data":"f9c6a60a9771d8f525b1289ebd3e0394ee86ce9389351f99465d03b3e6675b56"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.841865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerStarted","Data":"f1f35476a21abac78b8fc8131f04ba21909aa8a27296dceebb9599d225d6e4ca"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.852536 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerStarted","Data":"aa126a7b04ebcd87808d9d1110c908b5135b27ed56555a3f74daa6e0076f9b18"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.858003 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 
09:02:34.862460 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.862441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s" event={"ID":"7a022e41-0a7d-4ab3-bcce-c404160d00c9","Type":"ContainerDied","Data":"f23b8a629caea107f5dbe1d7776fc0280dc5b0b83dec64ac2b627f33f0021d8f"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.862574 4932 scope.go:117] "RemoveContainer" containerID="8209e9cddd8cc4231930316e06f635c24d8af46405f07407f81968eb0deeea93" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.866228 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xlbr5" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.869005 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" event={"ID":"d869e800-3cce-4e0d-b822-158858ee632b","Type":"ContainerDied","Data":"0eb0e12e073e44053d982e4ba2958b14b4c5d1d615bbf1fc0f489e79e69d4af0"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.869126 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8ffkm" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.876861 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" event={"ID":"e61399d9-1d7d-4109-b634-3bbcdad81d2a","Type":"ContainerStarted","Data":"c4e56cede14e04de56dce54667ac30e4b659c6a3e7acb9fba8c30d3e0ed5867e"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.892740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerStarted","Data":"93627ee69d9d2cd39f33d73acaf0ad7b9dac0f3482b178d0fe2d397e01e89381"} Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.900068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:34 crc kubenswrapper[4932]: E0321 09:02:34.903384 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.40334233 +0000 UTC m=+258.998559669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.978394 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2kwc"] Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.979850 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.990329 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 21 09:02:34 crc kubenswrapper[4932]: I0321 09:02:34.990651 4932 scope.go:117] "RemoveContainer" containerID="4d534460872774905ff8ca893964eab4b655f60474cb295b884808dc476c3cfe" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.001608 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.002486 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-catalog-content\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" 
Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.002550 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-utilities\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: E0321 09:02:35.002673 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.502637353 +0000 UTC m=+259.097835762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.002726 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6g8c\" (UniqueName: \"kubernetes.io/projected/29534428-e319-412a-a850-53b180783073-kube-api-access-l6g8c\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.002985 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:35 crc kubenswrapper[4932]: E0321 09:02:35.003676 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.503665556 +0000 UTC m=+259.098863825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.008766 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2kwc"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.011748 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.027865 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.034104 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:35 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:35 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:35 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.034368 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.046383 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6qm6c" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.080135 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8ffkm"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.080654 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8ffkm"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.103361 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.104185 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.104466 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-catalog-content\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.104509 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-utilities\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.104535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6g8c\" (UniqueName: \"kubernetes.io/projected/29534428-e319-412a-a850-53b180783073-kube-api-access-l6g8c\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: E0321 09:02:35.104943 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.604927851 +0000 UTC m=+259.200126120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.106859 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-catalog-content\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.107433 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-utilities\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.109395 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lcz8s"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.143457 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6g8c\" (UniqueName: \"kubernetes.io/projected/29534428-e319-412a-a850-53b180783073-kube-api-access-l6g8c\") pod \"redhat-marketplace-m2kwc\" (UID: \"29534428-e319-412a-a850-53b180783073\") " pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.206531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:35 crc kubenswrapper[4932]: E0321 09:02:35.207999 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.707982083 +0000 UTC m=+259.303180352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kbfjw" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.303295 4932 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-21T09:02:34.737254555Z","Handler":null,"Name":""} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.307528 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:35 crc kubenswrapper[4932]: E0321 09:02:35.307717 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 09:02:35.807685519 +0000 UTC m=+259.402883788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.308003 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.308070 4932 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.308110 4932 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.313549 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.314213 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.315446 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.355529 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kbfjw\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.370575 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.384205 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m89fk"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.388494 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.394326 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m89fk"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.411025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.413050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-catalog-content\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.413146 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxnh\" (UniqueName: \"kubernetes.io/projected/3bdb8dcf-07f0-4961-a04a-01133cbe4788-kube-api-access-gtxnh\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.413247 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-utilities\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.438262 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.475533 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fd9d5c85f-slx26"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.517025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-utilities\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.517167 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-catalog-content\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.517250 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtxnh\" (UniqueName: \"kubernetes.io/projected/3bdb8dcf-07f0-4961-a04a-01133cbe4788-kube-api-access-gtxnh\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.520000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-utilities\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.520153 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-catalog-content\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.575257 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtxnh\" (UniqueName: \"kubernetes.io/projected/3bdb8dcf-07f0-4961-a04a-01133cbe4788-kube-api-access-gtxnh\") pod \"redhat-marketplace-m89fk\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.756262 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a022e41-0a7d-4ab3-bcce-c404160d00c9" path="/var/lib/kubelet/pods/7a022e41-0a7d-4ab3-bcce-c404160d00c9/volumes" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.758271 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.765748 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d869e800-3cce-4e0d-b822-158858ee632b" path="/var/lib/kubelet/pods/d869e800-3cce-4e0d-b822-158858ee632b/volumes" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.768264 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 
09:02:35.770468 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.882007 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kbfjw"] Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.917127 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" event={"ID":"e61399d9-1d7d-4109-b634-3bbcdad81d2a","Type":"ContainerStarted","Data":"86f172be4628faf67a20f8048076816d17e1cb8059dfc1f4b210a5419b93e07e"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.917201 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" event={"ID":"e61399d9-1d7d-4109-b634-3bbcdad81d2a","Type":"ContainerStarted","Data":"6d53a841deeecb659bdecd4bc4cb5aab9b6a31cead70af80f09e4aa1b7789323"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.921405 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerID="dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764" exitCode=0 Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.921494 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerDied","Data":"dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.929652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" event={"ID":"23812f0f-b691-4b96-b82f-52e533a5b5fd","Type":"ContainerStarted","Data":"5082c9427271b3830669c14b351d169d0c5976d043b6048d8d0e95b98fd12abf"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.943009 4932 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-2g9xk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.943044 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-2g9xk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.943065 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2g9xk" podUID="4387c45a-d8f9-4478-b224-b4e656880aaf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.943114 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2g9xk" podUID="4387c45a-d8f9-4478-b224-b4e656880aaf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.945049 4932 generic.go:334] "Generic (PLEG): container finished" podID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerID="e1ddf6afd4b3dec42dc10db86900216ec2eca506c8f489392dfd2d9c37778b65" exitCode=0 Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.945247 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerDied","Data":"e1ddf6afd4b3dec42dc10db86900216ec2eca506c8f489392dfd2d9c37778b65"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.949642 4932 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bd9lw" podStartSLOduration=12.949630774 podStartE2EDuration="12.949630774s" podCreationTimestamp="2026-03-21 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:35.948939842 +0000 UTC m=+259.544138111" watchObservedRunningTime="2026-03-21 09:02:35.949630774 +0000 UTC m=+259.544829043" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.972462 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.973528 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.982144 4932 patch_prober.go:28] interesting pod/console-f9d7485db-v7lxk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.982231 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v7lxk" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.984427 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" event={"ID":"d0dc50e6-012f-4c75-a5a4-3b244f3ad376","Type":"ContainerStarted","Data":"ba5fc27211107d194ace273e977d9d4babce102ec6cba32bb2e5391bae293456"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.984533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" event={"ID":"d0dc50e6-012f-4c75-a5a4-3b244f3ad376","Type":"ContainerStarted","Data":"e7de44081df8f573ac3f6410e9afe0aa75962a029e36aaa43ca54115b6e1ffcc"} Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.985094 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.986648 4932 patch_prober.go:28] interesting pod/controller-manager-fd9d5c85f-slx26 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.986702 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 21 09:02:35 crc kubenswrapper[4932]: I0321 09:02:35.996054 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ttjm"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.001101 4932 ???:1] "http: TLS handshake error from 192.168.126.11:34942: no serving certificate available for the kubelet" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.002795 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.006098 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2kwc"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.010891 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.011922 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.012582 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.015294 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.017466 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ttjm"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.018817 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.032831 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-catalog-content\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.033027 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.033123 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-utilities\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.033169 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9lc\" (UniqueName: \"kubernetes.io/projected/35a4e0fc-1b46-40ea-8c9f-f284960024e6-kube-api-access-dc9lc\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.033320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.041062 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" podStartSLOduration=3.041038369 podStartE2EDuration="3.041038369s" podCreationTimestamp="2026-03-21 09:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:36.024780878 +0000 UTC m=+259.619979147" 
watchObservedRunningTime="2026-03-21 09:02:36.041038369 +0000 UTC m=+259.636236638" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.045604 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:36 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:36 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:36 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.045975 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.047031 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.086158 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bpdhb" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.135687 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.135740 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-utilities\") pod \"redhat-operators-4ttjm\" (UID: 
\"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.135771 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc9lc\" (UniqueName: \"kubernetes.io/projected/35a4e0fc-1b46-40ea-8c9f-f284960024e6-kube-api-access-dc9lc\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.135836 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.135930 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-catalog-content\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.139913 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.140400 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-utilities\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " 
pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.140471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-catalog-content\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.173844 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9lc\" (UniqueName: \"kubernetes.io/projected/35a4e0fc-1b46-40ea-8c9f-f284960024e6-kube-api-access-dc9lc\") pod \"redhat-operators-4ttjm\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") " pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.174406 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.369796 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m89fk"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.403930 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8s775"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.405971 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.407791 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.408279 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s775"] Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.409082 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.446375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-utilities\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.446464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-catalog-content\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.446480 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgmw\" (UniqueName: \"kubernetes.io/projected/b7cf94af-e246-4ff3-92c6-7e184228e57d-kube-api-access-5jgmw\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.547243 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-utilities\") pod \"redhat-operators-8s775\" (UID: 
\"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.547339 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-catalog-content\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.547372 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgmw\" (UniqueName: \"kubernetes.io/projected/b7cf94af-e246-4ff3-92c6-7e184228e57d-kube-api-access-5jgmw\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.548321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-utilities\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.548871 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-catalog-content\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.573567 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgmw\" (UniqueName: \"kubernetes.io/projected/b7cf94af-e246-4ff3-92c6-7e184228e57d-kube-api-access-5jgmw\") pod \"redhat-operators-8s775\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " 
pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:36 crc kubenswrapper[4932]: I0321 09:02:36.833483 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.029573 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-z8gns" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.033655 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:37 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:37 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:37 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.033720 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.035412 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" event={"ID":"32d20a8b-1cb1-4ca7-a47c-0b5325424e43","Type":"ContainerStarted","Data":"c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.035450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" event={"ID":"32d20a8b-1cb1-4ca7-a47c-0b5325424e43","Type":"ContainerStarted","Data":"04bc17828ed7c761cdd3606f5d6a4f65be267d870890f75af826bd6e9fa43405"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.037677 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.040556 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" event={"ID":"23812f0f-b691-4b96-b82f-52e533a5b5fd","Type":"ContainerStarted","Data":"e7a9e56b2791f633e42fddbfe1d2d8c7df15717779f7b2fb6cee1268bb7cdad9"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.041957 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.047947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerStarted","Data":"2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.048010 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerStarted","Data":"459b4a38f0ea48366d52cbde91ae38b3d5f6bfa5ee7d46611b4210c3edc77ce3"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.060215 4932 generic.go:334] "Generic (PLEG): container finished" podID="29534428-e319-412a-a850-53b180783073" containerID="673ea7c8d5c23fca7a6adfe55e67df0a191eda9427cd9902e6ec35fe81b61887" exitCode=0 Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.061887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2kwc" event={"ID":"29534428-e319-412a-a850-53b180783073","Type":"ContainerDied","Data":"673ea7c8d5c23fca7a6adfe55e67df0a191eda9427cd9902e6ec35fe81b61887"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.061971 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2kwc" event={"ID":"29534428-e319-412a-a850-53b180783073","Type":"ContainerStarted","Data":"bcd3217a2afd96a8c27fac9533c60ed930c650344de0b1e5c3eac14526124ee3"} Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.067964 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" podStartSLOduration=203.067937085 podStartE2EDuration="3m23.067937085s" podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:37.062205125 +0000 UTC m=+260.657403394" watchObservedRunningTime="2026-03-21 09:02:37.067937085 +0000 UTC m=+260.663135354" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.073079 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.078156 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.134997 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" podStartSLOduration=4.134976064 podStartE2EDuration="4.134976064s" podCreationTimestamp="2026-03-21 09:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:37.128568272 +0000 UTC m=+260.723766541" watchObservedRunningTime="2026-03-21 09:02:37.134976064 +0000 UTC m=+260.730174333" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.231136 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.261068 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 21 09:02:37 crc kubenswrapper[4932]: W0321 09:02:37.327936 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfe4610cd_eb13_4992_acdf_51f6e60c3fef.slice/crio-ae1ba0d21042f0fc3a20439b25f0b77ba9f5783688716ec71f5bf87a1f6a42b0 WatchSource:0}: Error finding container ae1ba0d21042f0fc3a20439b25f0b77ba9f5783688716ec71f5bf87a1f6a42b0: Status 404 returned error can't find the container with id ae1ba0d21042f0fc3a20439b25f0b77ba9f5783688716ec71f5bf87a1f6a42b0 Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.390278 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ttjm"] Mar 21 09:02:37 crc kubenswrapper[4932]: W0321 09:02:37.446158 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35a4e0fc_1b46_40ea_8c9f_f284960024e6.slice/crio-035796def1d6940e18c786cc979676eb42451c40236b0702cf6a6f2ef5de40b3 WatchSource:0}: Error finding container 035796def1d6940e18c786cc979676eb42451c40236b0702cf6a6f2ef5de40b3: Status 404 returned error can't find the container with id 035796def1d6940e18c786cc979676eb42451c40236b0702cf6a6f2ef5de40b3 Mar 21 09:02:37 crc kubenswrapper[4932]: I0321 09:02:37.650682 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s775"] Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.044780 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:38 crc 
kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:38 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:38 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.045392 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.074773 4932 generic.go:334] "Generic (PLEG): container finished" podID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerID="2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc" exitCode=0 Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.074866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerDied","Data":"2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc"} Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.078045 4932 generic.go:334] "Generic (PLEG): container finished" podID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerID="faca934e3ca95df8da1445b427c6bf9bbb783c027cfcd3367f21b8f6159a71bc" exitCode=0 Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.078137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerDied","Data":"faca934e3ca95df8da1445b427c6bf9bbb783c027cfcd3367f21b8f6159a71bc"} Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.078177 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerStarted","Data":"28f769933eb58fa8fc85d3844197fd4be101cebc84bc30fc297195e2f489401c"} Mar 21 09:02:38 crc 
kubenswrapper[4932]: I0321 09:02:38.081682 4932 generic.go:334] "Generic (PLEG): container finished" podID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerID="d54a7199ce36da5b3a41f8a1cf92bb3489516ac76f08279748f938cc1d840fc9" exitCode=0 Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.081728 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerDied","Data":"d54a7199ce36da5b3a41f8a1cf92bb3489516ac76f08279748f938cc1d840fc9"} Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.081746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerStarted","Data":"035796def1d6940e18c786cc979676eb42451c40236b0702cf6a6f2ef5de40b3"} Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.096944 4932 generic.go:334] "Generic (PLEG): container finished" podID="8309fb7a-e364-4f9b-a723-a0d4926a0a51" containerID="024919ca6cc75d54fd07cac05a2e2869d6d1dd7301ca35177f4e73ea829a16c2" exitCode=0 Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.097048 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" event={"ID":"8309fb7a-e364-4f9b-a723-a0d4926a0a51","Type":"ContainerDied","Data":"024919ca6cc75d54fd07cac05a2e2869d6d1dd7301ca35177f4e73ea829a16c2"} Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.108603 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fe4610cd-eb13-4992-acdf-51f6e60c3fef","Type":"ContainerStarted","Data":"ae1ba0d21042f0fc3a20439b25f0b77ba9f5783688716ec71f5bf87a1f6a42b0"} Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.441690 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 21 09:02:38 crc 
kubenswrapper[4932]: I0321 09:02:38.443277 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.447999 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.449418 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.457465 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.608702 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379cf01f-5bee-4f49-853b-11f9ba7197d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.608760 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379cf01f-5bee-4f49-853b-11f9ba7197d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.711963 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379cf01f-5bee-4f49-853b-11f9ba7197d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.712024 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379cf01f-5bee-4f49-853b-11f9ba7197d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.712371 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379cf01f-5bee-4f49-853b-11f9ba7197d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.758707 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379cf01f-5bee-4f49-853b-11f9ba7197d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:38 crc kubenswrapper[4932]: I0321 09:02:38.772795 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.070238 4932 ???:1] "http: TLS handshake error from 192.168.126.11:34958: no serving certificate available for the kubelet" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.080694 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:39 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:39 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:39 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.080763 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.121752 4932 generic.go:334] "Generic (PLEG): container finished" podID="fe4610cd-eb13-4992-acdf-51f6e60c3fef" containerID="9fbddc35b8f89845381cb1f113c7a6eabfaebd316a1ce4689e0a31694973edb5" exitCode=0 Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.121836 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fe4610cd-eb13-4992-acdf-51f6e60c3fef","Type":"ContainerDied","Data":"9fbddc35b8f89845381cb1f113c7a6eabfaebd316a1ce4689e0a31694973edb5"} Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.236010 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 21 09:02:39 crc kubenswrapper[4932]: W0321 09:02:39.275568 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod379cf01f_5bee_4f49_853b_11f9ba7197d3.slice/crio-db9c0d288ed1d491bbe598b03974efd074250ee4a8f1c83d2b87736cc9b042f7 WatchSource:0}: Error finding container db9c0d288ed1d491bbe598b03974efd074250ee4a8f1c83d2b87736cc9b042f7: Status 404 returned error can't find the container with id db9c0d288ed1d491bbe598b03974efd074250ee4a8f1c83d2b87736cc9b042f7 Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.457341 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.535514 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/8309fb7a-e364-4f9b-a723-a0d4926a0a51-kube-api-access-wqqqf\") pod \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.535674 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume\") pod \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.535724 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume\") pod \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\" (UID: \"8309fb7a-e364-4f9b-a723-a0d4926a0a51\") " Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.537193 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume" (OuterVolumeSpecName: "config-volume") pod "8309fb7a-e364-4f9b-a723-a0d4926a0a51" (UID: 
"8309fb7a-e364-4f9b-a723-a0d4926a0a51"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.547889 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8309fb7a-e364-4f9b-a723-a0d4926a0a51" (UID: "8309fb7a-e364-4f9b-a723-a0d4926a0a51"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.551859 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8309fb7a-e364-4f9b-a723-a0d4926a0a51-kube-api-access-wqqqf" (OuterVolumeSpecName: "kube-api-access-wqqqf") pod "8309fb7a-e364-4f9b-a723-a0d4926a0a51" (UID: "8309fb7a-e364-4f9b-a723-a0d4926a0a51"). InnerVolumeSpecName "kube-api-access-wqqqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.637103 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8309fb7a-e364-4f9b-a723-a0d4926a0a51-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.637146 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/8309fb7a-e364-4f9b-a723-a0d4926a0a51-kube-api-access-wqqqf\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:39 crc kubenswrapper[4932]: I0321 09:02:39.637156 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8309fb7a-e364-4f9b-a723-a0d4926a0a51-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.039789 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:40 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:40 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:40 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.040459 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.141381 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" event={"ID":"8309fb7a-e364-4f9b-a723-a0d4926a0a51","Type":"ContainerDied","Data":"04e9d788f6208c092d22d09404f8978b9ab47af9949bff9d41b3df5e7c1c2598"} Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.141456 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e9d788f6208c092d22d09404f8978b9ab47af9949bff9d41b3df5e7c1c2598" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.141529 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.171063 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"379cf01f-5bee-4f49-853b-11f9ba7197d3","Type":"ContainerStarted","Data":"5e0d9c2c0d7124204732bb3efb469d95b679b3451eec3be0b068406a8d453ec1"} Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.171134 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"379cf01f-5bee-4f49-853b-11f9ba7197d3","Type":"ContainerStarted","Data":"db9c0d288ed1d491bbe598b03974efd074250ee4a8f1c83d2b87736cc9b042f7"} Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.206599 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.206567612 podStartE2EDuration="2.206567612s" podCreationTimestamp="2026-03-21 09:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:02:40.189897578 +0000 UTC m=+263.785095847" watchObservedRunningTime="2026-03-21 09:02:40.206567612 +0000 UTC m=+263.801765881" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.722994 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.866693 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kubelet-dir\") pod \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.866853 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fe4610cd-eb13-4992-acdf-51f6e60c3fef" (UID: "fe4610cd-eb13-4992-acdf-51f6e60c3fef"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.867922 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kube-api-access\") pod \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\" (UID: \"fe4610cd-eb13-4992-acdf-51f6e60c3fef\") " Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.868755 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.877805 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fe4610cd-eb13-4992-acdf-51f6e60c3fef" (UID: "fe4610cd-eb13-4992-acdf-51f6e60c3fef"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:02:40 crc kubenswrapper[4932]: I0321 09:02:40.970728 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4610cd-eb13-4992-acdf-51f6e60c3fef-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.034004 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:41 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:41 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:41 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.034094 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.161498 4932 ???:1] "http: TLS handshake error from 192.168.126.11:34962: no serving certificate available for the kubelet" Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.203953 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fe4610cd-eb13-4992-acdf-51f6e60c3fef","Type":"ContainerDied","Data":"ae1ba0d21042f0fc3a20439b25f0b77ba9f5783688716ec71f5bf87a1f6a42b0"} Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.204054 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae1ba0d21042f0fc3a20439b25f0b77ba9f5783688716ec71f5bf87a1f6a42b0" Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.204144 4932 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.211154 4932 generic.go:334] "Generic (PLEG): container finished" podID="379cf01f-5bee-4f49-853b-11f9ba7197d3" containerID="5e0d9c2c0d7124204732bb3efb469d95b679b3451eec3be0b068406a8d453ec1" exitCode=0 Mar 21 09:02:41 crc kubenswrapper[4932]: I0321 09:02:41.211204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"379cf01f-5bee-4f49-853b-11f9ba7197d3","Type":"ContainerDied","Data":"5e0d9c2c0d7124204732bb3efb469d95b679b3451eec3be0b068406a8d453ec1"} Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.035424 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:42 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:42 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:42 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.035558 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.233025 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-945r8" Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.627737 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.735781 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379cf01f-5bee-4f49-853b-11f9ba7197d3-kube-api-access\") pod \"379cf01f-5bee-4f49-853b-11f9ba7197d3\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.737237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379cf01f-5bee-4f49-853b-11f9ba7197d3-kubelet-dir\") pod \"379cf01f-5bee-4f49-853b-11f9ba7197d3\" (UID: \"379cf01f-5bee-4f49-853b-11f9ba7197d3\") " Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.737660 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379cf01f-5bee-4f49-853b-11f9ba7197d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "379cf01f-5bee-4f49-853b-11f9ba7197d3" (UID: "379cf01f-5bee-4f49-853b-11f9ba7197d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.745391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379cf01f-5bee-4f49-853b-11f9ba7197d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "379cf01f-5bee-4f49-853b-11f9ba7197d3" (UID: "379cf01f-5bee-4f49-853b-11f9ba7197d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.839309 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/379cf01f-5bee-4f49-853b-11f9ba7197d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:42 crc kubenswrapper[4932]: I0321 09:02:42.839824 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/379cf01f-5bee-4f49-853b-11f9ba7197d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:02:43 crc kubenswrapper[4932]: I0321 09:02:43.040685 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:43 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:43 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:43 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:43 crc kubenswrapper[4932]: I0321 09:02:43.040784 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:43 crc kubenswrapper[4932]: I0321 09:02:43.256752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"379cf01f-5bee-4f49-853b-11f9ba7197d3","Type":"ContainerDied","Data":"db9c0d288ed1d491bbe598b03974efd074250ee4a8f1c83d2b87736cc9b042f7"} Mar 21 09:02:43 crc kubenswrapper[4932]: I0321 09:02:43.256813 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db9c0d288ed1d491bbe598b03974efd074250ee4a8f1c83d2b87736cc9b042f7" Mar 21 09:02:43 crc kubenswrapper[4932]: 
I0321 09:02:43.257723 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 09:02:44 crc kubenswrapper[4932]: I0321 09:02:44.031723 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:44 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:44 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:44 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:44 crc kubenswrapper[4932]: I0321 09:02:44.031798 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:45 crc kubenswrapper[4932]: I0321 09:02:45.029864 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 09:02:45 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Mar 21 09:02:45 crc kubenswrapper[4932]: [+]process-running ok Mar 21 09:02:45 crc kubenswrapper[4932]: healthz check failed Mar 21 09:02:45 crc kubenswrapper[4932]: I0321 09:02:45.029938 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 09:02:45 crc kubenswrapper[4932]: I0321 09:02:45.948714 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/downloads-7954f5f757-2g9xk"
Mar 21 09:02:45 crc kubenswrapper[4932]: I0321 09:02:45.973817 4932 patch_prober.go:28] interesting pod/console-f9d7485db-v7lxk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Mar 21 09:02:45 crc kubenswrapper[4932]: I0321 09:02:45.973882 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v7lxk" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Mar 21 09:02:46 crc kubenswrapper[4932]: I0321 09:02:46.037377 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 21 09:02:46 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Mar 21 09:02:46 crc kubenswrapper[4932]: [+]process-running ok
Mar 21 09:02:46 crc kubenswrapper[4932]: healthz check failed
Mar 21 09:02:46 crc kubenswrapper[4932]: I0321 09:02:46.037453 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 21 09:02:47 crc kubenswrapper[4932]: I0321 09:02:47.031615 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 21 09:02:47 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Mar 21 09:02:47 crc kubenswrapper[4932]: [+]process-running ok
Mar 21 09:02:47 crc kubenswrapper[4932]: healthz check failed
Mar 21 09:02:47 crc kubenswrapper[4932]: I0321 09:02:47.031679 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 21 09:02:48 crc kubenswrapper[4932]: I0321 09:02:48.033813 4932 patch_prober.go:28] interesting pod/router-default-5444994796-z8gns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 21 09:02:48 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Mar 21 09:02:48 crc kubenswrapper[4932]: [+]process-running ok
Mar 21 09:02:48 crc kubenswrapper[4932]: healthz check failed
Mar 21 09:02:48 crc kubenswrapper[4932]: I0321 09:02:48.033893 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z8gns" podUID="b870d2fa-ac84-43c4-a552-73ab8a723e12" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 21 09:02:49 crc kubenswrapper[4932]: I0321 09:02:49.033105 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-z8gns"
Mar 21 09:02:49 crc kubenswrapper[4932]: I0321 09:02:49.036416 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-z8gns"
Mar 21 09:02:51 crc kubenswrapper[4932]: I0321 09:02:51.931710 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fd9d5c85f-slx26"]
Mar 21 09:02:51 crc kubenswrapper[4932]: I0321 09:02:51.932235 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerName="controller-manager" containerID="cri-o://ba5fc27211107d194ace273e977d9d4babce102ec6cba32bb2e5391bae293456" gracePeriod=30
Mar 21 09:02:51 crc kubenswrapper[4932]: I0321 09:02:51.943232 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"]
Mar 21 09:02:51 crc kubenswrapper[4932]: I0321 09:02:51.943582 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" podUID="23812f0f-b691-4b96-b82f-52e533a5b5fd" containerName="route-controller-manager" containerID="cri-o://e7a9e56b2791f633e42fddbfe1d2d8c7df15717779f7b2fb6cee1268bb7cdad9" gracePeriod=30
Mar 21 09:02:52 crc kubenswrapper[4932]: I0321 09:02:52.359287 4932 generic.go:334] "Generic (PLEG): container finished" podID="23812f0f-b691-4b96-b82f-52e533a5b5fd" containerID="e7a9e56b2791f633e42fddbfe1d2d8c7df15717779f7b2fb6cee1268bb7cdad9" exitCode=0
Mar 21 09:02:52 crc kubenswrapper[4932]: I0321 09:02:52.359587 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" event={"ID":"23812f0f-b691-4b96-b82f-52e533a5b5fd","Type":"ContainerDied","Data":"e7a9e56b2791f633e42fddbfe1d2d8c7df15717779f7b2fb6cee1268bb7cdad9"}
Mar 21 09:02:53 crc kubenswrapper[4932]: I0321 09:02:53.366882 4932 generic.go:334] "Generic (PLEG): container finished" podID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerID="ba5fc27211107d194ace273e977d9d4babce102ec6cba32bb2e5391bae293456" exitCode=0
Mar 21 09:02:53 crc kubenswrapper[4932]: I0321 09:02:53.366934 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" event={"ID":"d0dc50e6-012f-4c75-a5a4-3b244f3ad376","Type":"ContainerDied","Data":"ba5fc27211107d194ace273e977d9d4babce102ec6cba32bb2e5391bae293456"}
Mar 21 09:02:55 crc kubenswrapper[4932]: I0321 09:02:55.013264 4932 patch_prober.go:28] interesting pod/route-controller-manager-5d4784c968-gfj7q container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body=
Mar 21 09:02:55 crc kubenswrapper[4932]: I0321 09:02:55.013415 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" podUID="23812f0f-b691-4b96-b82f-52e533a5b5fd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused"
Mar 21 09:02:55 crc kubenswrapper[4932]: I0321 09:02:55.029648 4932 patch_prober.go:28] interesting pod/controller-manager-fd9d5c85f-slx26 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Mar 21 09:02:55 crc kubenswrapper[4932]: I0321 09:02:55.029723 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Mar 21 09:02:55 crc kubenswrapper[4932]: I0321 09:02:55.376494 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw"
Mar 21 09:02:56 crc kubenswrapper[4932]: I0321 09:02:56.170399 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:02:56 crc kubenswrapper[4932]: I0321 09:02:56.175026 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-v7lxk"
Mar 21 09:03:00 crc kubenswrapper[4932]: I0321 09:03:00.225500 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:03:00 crc kubenswrapper[4932]: I0321 09:03:00.226264 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:03:01 crc kubenswrapper[4932]: I0321 09:03:01.668809 4932 ???:1] "http: TLS handshake error from 192.168.126.11:60682: no serving certificate available for the kubelet"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.210232 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.211913 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 21 09:03:03 crc kubenswrapper[4932]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Mar 21 09:03:03 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vh9v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29568062-ckbkj_openshift-infra(026bb1a2-7881-45a8-8845-53d8bbcb4166): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Mar 21 09:03:03 crc kubenswrapper[4932]: > logger="UnhandledError"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.213715 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" podUID="026bb1a2-7881-45a8-8845-53d8bbcb4166"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.226598 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.231287 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.231480 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 21 09:03:03 crc kubenswrapper[4932]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Mar 21 09:03:03 crc kubenswrapper[4932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jc6tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29568060-6lptj_openshift-infra(64d9430a-2f41-4dac-bfa7-9fa47a85db9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Mar 21 09:03:03 crc kubenswrapper[4932]: > logger="UnhandledError"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.232669 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29568060-6lptj" podUID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.237209 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.260696 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"]
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.261027 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4610cd-eb13-4992-acdf-51f6e60c3fef" containerName="pruner"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261048 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4610cd-eb13-4992-acdf-51f6e60c3fef" containerName="pruner"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.261062 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8309fb7a-e364-4f9b-a723-a0d4926a0a51" containerName="collect-profiles"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261071 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8309fb7a-e364-4f9b-a723-a0d4926a0a51" containerName="collect-profiles"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.261085 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379cf01f-5bee-4f49-853b-11f9ba7197d3" containerName="pruner"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261094 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="379cf01f-5bee-4f49-853b-11f9ba7197d3" containerName="pruner"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.261108 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerName="controller-manager"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261115 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerName="controller-manager"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.261126 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23812f0f-b691-4b96-b82f-52e533a5b5fd" containerName="route-controller-manager"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261133 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="23812f0f-b691-4b96-b82f-52e533a5b5fd" containerName="route-controller-manager"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261268 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="23812f0f-b691-4b96-b82f-52e533a5b5fd" containerName="route-controller-manager"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261280 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="379cf01f-5bee-4f49-853b-11f9ba7197d3" containerName="pruner"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261293 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" containerName="controller-manager"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261305 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8309fb7a-e364-4f9b-a723-a0d4926a0a51" containerName="collect-profiles"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261319 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4610cd-eb13-4992-acdf-51f6e60c3fef" containerName="pruner"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.261886 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.271314 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"]
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.318890 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23812f0f-b691-4b96-b82f-52e533a5b5fd-serving-cert\") pod \"23812f0f-b691-4b96-b82f-52e533a5b5fd\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.318952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnc6g\" (UniqueName: \"kubernetes.io/projected/23812f0f-b691-4b96-b82f-52e533a5b5fd-kube-api-access-cnc6g\") pod \"23812f0f-b691-4b96-b82f-52e533a5b5fd\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.318995 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-proxy-ca-bundles\") pod \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-client-ca\") pod \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319169 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-config\") pod \"23812f0f-b691-4b96-b82f-52e533a5b5fd\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319210 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-client-ca\") pod \"23812f0f-b691-4b96-b82f-52e533a5b5fd\" (UID: \"23812f0f-b691-4b96-b82f-52e533a5b5fd\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319229 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-serving-cert\") pod \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319282 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-config\") pod \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319400 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58q9v\" (UniqueName: \"kubernetes.io/projected/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-kube-api-access-58q9v\") pod \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\" (UID: \"d0dc50e6-012f-4c75-a5a4-3b244f3ad376\") "
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319619 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-client-ca\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319648 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-serving-cert\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.319707 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-config\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.320255 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgnv\" (UniqueName: \"kubernetes.io/projected/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-kube-api-access-vkgnv\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.321631 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-config" (OuterVolumeSpecName: "config") pod "d0dc50e6-012f-4c75-a5a4-3b244f3ad376" (UID: "d0dc50e6-012f-4c75-a5a4-3b244f3ad376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.321972 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "23812f0f-b691-4b96-b82f-52e533a5b5fd" (UID: "23812f0f-b691-4b96-b82f-52e533a5b5fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.322048 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d0dc50e6-012f-4c75-a5a4-3b244f3ad376" (UID: "d0dc50e6-012f-4c75-a5a4-3b244f3ad376"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.322559 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-client-ca" (OuterVolumeSpecName: "client-ca") pod "d0dc50e6-012f-4c75-a5a4-3b244f3ad376" (UID: "d0dc50e6-012f-4c75-a5a4-3b244f3ad376"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.323094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-config" (OuterVolumeSpecName: "config") pod "23812f0f-b691-4b96-b82f-52e533a5b5fd" (UID: "23812f0f-b691-4b96-b82f-52e533a5b5fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.340177 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-kube-api-access-58q9v" (OuterVolumeSpecName: "kube-api-access-58q9v") pod "d0dc50e6-012f-4c75-a5a4-3b244f3ad376" (UID: "d0dc50e6-012f-4c75-a5a4-3b244f3ad376"). InnerVolumeSpecName "kube-api-access-58q9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.340487 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23812f0f-b691-4b96-b82f-52e533a5b5fd-kube-api-access-cnc6g" (OuterVolumeSpecName: "kube-api-access-cnc6g") pod "23812f0f-b691-4b96-b82f-52e533a5b5fd" (UID: "23812f0f-b691-4b96-b82f-52e533a5b5fd"). InnerVolumeSpecName "kube-api-access-cnc6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.341079 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23812f0f-b691-4b96-b82f-52e533a5b5fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23812f0f-b691-4b96-b82f-52e533a5b5fd" (UID: "23812f0f-b691-4b96-b82f-52e533a5b5fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.349301 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d0dc50e6-012f-4c75-a5a4-3b244f3ad376" (UID: "d0dc50e6-012f-4c75-a5a4-3b244f3ad376"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.421873 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgnv\" (UniqueName: \"kubernetes.io/projected/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-kube-api-access-vkgnv\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.424753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-client-ca\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.426598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26" event={"ID":"d0dc50e6-012f-4c75-a5a4-3b244f3ad376","Type":"ContainerDied","Data":"e7de44081df8f573ac3f6410e9afe0aa75962a029e36aaa43ca54115b6e1ffcc"}
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.426797 4932 scope.go:117] "RemoveContainer" containerID="ba5fc27211107d194ace273e977d9d4babce102ec6cba32bb2e5391bae293456"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.426620 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fd9d5c85f-slx26"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.426630 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-serving-cert\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427118 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-config\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427264 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427382 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58q9v\" (UniqueName: \"kubernetes.io/projected/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-kube-api-access-58q9v\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427410 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23812f0f-b691-4b96-b82f-52e533a5b5fd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427426 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnc6g\" (UniqueName: \"kubernetes.io/projected/23812f0f-b691-4b96-b82f-52e533a5b5fd-kube-api-access-cnc6g\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427440 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427454 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-client-ca\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427468 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427480 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23812f0f-b691-4b96-b82f-52e533a5b5fd-client-ca\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.427494 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0dc50e6-012f-4c75-a5a4-3b244f3ad376-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.428704 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-config\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.429905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q" event={"ID":"23812f0f-b691-4b96-b82f-52e533a5b5fd","Type":"ContainerDied","Data":"5082c9427271b3830669c14b351d169d0c5976d043b6048d8d0e95b98fd12abf"}
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.429978 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.431293 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29568060-6lptj" podUID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a"
Mar 21 09:03:03 crc kubenswrapper[4932]: E0321 09:03:03.431521 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" podUID="026bb1a2-7881-45a8-8845-53d8bbcb4166"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.433501 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-client-ca\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.435801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-serving-cert\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.443454 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgnv\" (UniqueName: \"kubernetes.io/projected/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-kube-api-access-vkgnv\") pod \"route-controller-manager-ccb47d448-pcbfp\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.485057 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"]
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.495265 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d4784c968-gfj7q"]
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.500514 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fd9d5c85f-slx26"]
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.504606 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fd9d5c85f-slx26"]
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.605327 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.711964 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23812f0f-b691-4b96-b82f-52e533a5b5fd" path="/var/lib/kubelet/pods/23812f0f-b691-4b96-b82f-52e533a5b5fd/volumes"
Mar 21 09:03:03 crc kubenswrapper[4932]: I0321 09:03:03.712700 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0dc50e6-012f-4c75-a5a4-3b244f3ad376" path="/var/lib/kubelet/pods/d0dc50e6-012f-4c75-a5a4-3b244f3ad376/volumes"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.603561 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"]
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.606088 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.609608 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"]
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.612455 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.620908 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.621621 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.622514 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.626600 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.626800 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.627397 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.662472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvl4c\" (UniqueName: \"kubernetes.io/projected/fc773320-dc83-4bb5-8f46-2f0be807266a-kube-api-access-rvl4c\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.662575 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-config\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.662616 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-client-ca\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.662802 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc773320-dc83-4bb5-8f46-2f0be807266a-serving-cert\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.662839 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-proxy-ca-bundles\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.764332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvl4c\" (UniqueName: \"kubernetes.io/projected/fc773320-dc83-4bb5-8f46-2f0be807266a-kube-api-access-rvl4c\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.764427 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-config\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.764466 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-client-ca\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"
Mar 21 09:03:05 crc kubenswrapper[4932]: I0321
09:03:05.764613 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc773320-dc83-4bb5-8f46-2f0be807266a-serving-cert\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.764635 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-proxy-ca-bundles\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.765960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-config\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.767876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-client-ca\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.768302 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-proxy-ca-bundles\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " 
pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.772544 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc773320-dc83-4bb5-8f46-2f0be807266a-serving-cert\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.783305 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvl4c\" (UniqueName: \"kubernetes.io/projected/fc773320-dc83-4bb5-8f46-2f0be807266a-kube-api-access-rvl4c\") pod \"controller-manager-7c67d7b445-nsvlz\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:05 crc kubenswrapper[4932]: I0321 09:03:05.945885 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:06 crc kubenswrapper[4932]: I0321 09:03:06.245393 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w2z2m" Mar 21 09:03:11 crc kubenswrapper[4932]: E0321 09:03:11.208120 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 21 09:03:11 crc kubenswrapper[4932]: E0321 09:03:11.209248 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jgmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,Wind
owsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8s775_openshift-marketplace(b7cf94af-e246-4ff3-92c6-7e184228e57d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 09:03:11 crc kubenswrapper[4932]: E0321 09:03:11.211248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8s775" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" Mar 21 09:03:11 crc kubenswrapper[4932]: E0321 09:03:11.217231 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 21 09:03:11 crc kubenswrapper[4932]: E0321 09:03:11.217505 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dc9lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4ttjm_openshift-marketplace(35a4e0fc-1b46-40ea-8c9f-f284960024e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 09:03:11 crc kubenswrapper[4932]: E0321 09:03:11.218698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4ttjm" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" Mar 21 09:03:11 crc 
kubenswrapper[4932]: I0321 09:03:11.225638 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.226647 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.231623 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.231876 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.258641 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.358453 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.358537 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.464473 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.464580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.464680 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.488238 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.564254 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 09:03:11 crc kubenswrapper[4932]: I0321 09:03:11.916793 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"] Mar 21 09:03:12 crc kubenswrapper[4932]: I0321 09:03:12.021285 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"] Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.116241 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8s775" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.116395 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4ttjm" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.170625 4932 scope.go:117] "RemoveContainer" containerID="e7a9e56b2791f633e42fddbfe1d2d8c7df15717779f7b2fb6cee1268bb7cdad9" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.221800 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.221951 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2lqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n7ztm_openshift-marketplace(e5133ee9-0d6d-4533-81b4-9d8518eef6c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.223246 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-n7ztm" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.240490 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.240677 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8hhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gx64c_openshift-marketplace(4b4f0982-25bd-4a81-b00f-7b35377a893a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.242245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gx64c" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.452866 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"] Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.495795 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerStarted","Data":"dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305"} Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.502553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbmnj" event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerStarted","Data":"1d2f39a26ed17b767a1104da056fa93420bded8565e5f8c1487690df48733756"} Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.510565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" 
event={"ID":"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7","Type":"ContainerStarted","Data":"209178244c07e04f7124dc1e2ec5b7892c4c409b96a98cc97afbe533e97f47f8"} Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.514256 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerStarted","Data":"6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8"} Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.578382 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gx64c" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" Mar 21 09:03:13 crc kubenswrapper[4932]: E0321 09:03:13.578695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n7ztm" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.780712 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"] Mar 21 09:03:13 crc kubenswrapper[4932]: I0321 09:03:13.812634 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.573793 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" event={"ID":"fc773320-dc83-4bb5-8f46-2f0be807266a","Type":"ContainerStarted","Data":"67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27"} Mar 21 09:03:14 crc 
kubenswrapper[4932]: I0321 09:03:14.574238 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.573985 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" podUID="fc773320-dc83-4bb5-8f46-2f0be807266a" containerName="controller-manager" containerID="cri-o://67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27" gracePeriod=30 Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.574258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" event={"ID":"fc773320-dc83-4bb5-8f46-2f0be807266a","Type":"ContainerStarted","Data":"3237014b5ecc7ffb083ec7e9ede49055ae2413223c819444700570a1495ff88d"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.576514 4932 generic.go:334] "Generic (PLEG): container finished" podID="29534428-e319-412a-a850-53b180783073" containerID="c55265913a2b9bfaa4d3b2acfc53f91ffe8a02e8c3a552f85415b3af92820bdc" exitCode=0 Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.576597 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2kwc" event={"ID":"29534428-e319-412a-a850-53b180783073","Type":"ContainerDied","Data":"c55265913a2b9bfaa4d3b2acfc53f91ffe8a02e8c3a552f85415b3af92820bdc"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.582572 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8","Type":"ContainerStarted","Data":"a0cfb8c4fb85e256151740b57a8360856a79379bbb3b3860ba09986a061624f2"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.582635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8","Type":"ContainerStarted","Data":"81b22c5b6800e6e36847758b45bcf5defeebd79b952c691851fc3aa6a7d82f6b"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.586233 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerID="dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305" exitCode=0 Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.586365 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerDied","Data":"dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.587566 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.590121 4932 generic.go:334] "Generic (PLEG): container finished" podID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerID="1d2f39a26ed17b767a1104da056fa93420bded8565e5f8c1487690df48733756" exitCode=0 Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.590189 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbmnj" event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerDied","Data":"1d2f39a26ed17b767a1104da056fa93420bded8565e5f8c1487690df48733756"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.592307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" event={"ID":"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7","Type":"ContainerStarted","Data":"3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.592465 4932 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" podUID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" containerName="route-controller-manager" containerID="cri-o://3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc" gracePeriod=30 Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.592879 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.598946 4932 generic.go:334] "Generic (PLEG): container finished" podID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerID="6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8" exitCode=0 Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.599016 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerDied","Data":"6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8"} Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.602475 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" podStartSLOduration=23.602455527 podStartE2EDuration="23.602455527s" podCreationTimestamp="2026-03-21 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:14.599520894 +0000 UTC m=+298.194719173" watchObservedRunningTime="2026-03-21 09:03:14.602455527 +0000 UTC m=+298.197653796" Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.618691 4932 patch_prober.go:28] interesting pod/route-controller-manager-ccb47d448-pcbfp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 
10.217.0.2:59574->10.217.0.58:8443: read: connection reset by peer" start-of-body= Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.618765 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" podUID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:59574->10.217.0.58:8443: read: connection reset by peer" Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.688269 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.688243356 podStartE2EDuration="3.688243356s" podCreationTimestamp="2026-03-21 09:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:14.665541422 +0000 UTC m=+298.260739691" watchObservedRunningTime="2026-03-21 09:03:14.688243356 +0000 UTC m=+298.283441625" Mar 21 09:03:14 crc kubenswrapper[4932]: I0321 09:03:14.757335 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" podStartSLOduration=23.757308919 podStartE2EDuration="23.757308919s" podCreationTimestamp="2026-03-21 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:14.752821078 +0000 UTC m=+298.348019347" watchObservedRunningTime="2026-03-21 09:03:14.757308919 +0000 UTC m=+298.352507188" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.008752 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.043296 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67459cd59b-cbrpp"] Mar 21 09:03:15 crc kubenswrapper[4932]: E0321 09:03:15.050842 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc773320-dc83-4bb5-8f46-2f0be807266a" containerName="controller-manager" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.050873 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc773320-dc83-4bb5-8f46-2f0be807266a" containerName="controller-manager" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.051013 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc773320-dc83-4bb5-8f46-2f0be807266a" containerName="controller-manager" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.051675 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.053389 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67459cd59b-cbrpp"] Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.059938 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-ccb47d448-pcbfp_3b917379-75a2-4eb4-a5e0-c3afbd02b4c7/route-controller-manager/0.log" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.060004 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.152315 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-proxy-ca-bundles\") pod \"fc773320-dc83-4bb5-8f46-2f0be807266a\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153049 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc773320-dc83-4bb5-8f46-2f0be807266a-serving-cert\") pod \"fc773320-dc83-4bb5-8f46-2f0be807266a\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153161 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-client-ca\") pod \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153257 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvl4c\" (UniqueName: \"kubernetes.io/projected/fc773320-dc83-4bb5-8f46-2f0be807266a-kube-api-access-rvl4c\") pod \"fc773320-dc83-4bb5-8f46-2f0be807266a\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153360 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-config\") pod \"fc773320-dc83-4bb5-8f46-2f0be807266a\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153453 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-client-ca\") pod \"fc773320-dc83-4bb5-8f46-2f0be807266a\" (UID: \"fc773320-dc83-4bb5-8f46-2f0be807266a\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153550 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkgnv\" (UniqueName: \"kubernetes.io/projected/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-kube-api-access-vkgnv\") pod \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153691 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-serving-cert\") pod \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153771 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-config\") pod \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\" (UID: \"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7\") " Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154077 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-serving-cert\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154188 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-proxy-ca-bundles\") pod 
\"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154294 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-config\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154083 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc773320-dc83-4bb5-8f46-2f0be807266a" (UID: "fc773320-dc83-4bb5-8f46-2f0be807266a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154140 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-config" (OuterVolumeSpecName: "config") pod "fc773320-dc83-4bb5-8f46-2f0be807266a" (UID: "fc773320-dc83-4bb5-8f46-2f0be807266a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.153603 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fc773320-dc83-4bb5-8f46-2f0be807266a" (UID: "fc773320-dc83-4bb5-8f46-2f0be807266a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxk6\" (UniqueName: \"kubernetes.io/projected/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-kube-api-access-rnxk6\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154545 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-client-ca" (OuterVolumeSpecName: "client-ca") pod "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" (UID: "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154668 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-config" (OuterVolumeSpecName: "config") pod "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" (UID: "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.154814 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-client-ca\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.155025 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.155046 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.155057 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.155067 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.155078 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc773320-dc83-4bb5-8f46-2f0be807266a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.160500 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" (UID: "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.164766 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc773320-dc83-4bb5-8f46-2f0be807266a-kube-api-access-rvl4c" (OuterVolumeSpecName: "kube-api-access-rvl4c") pod "fc773320-dc83-4bb5-8f46-2f0be807266a" (UID: "fc773320-dc83-4bb5-8f46-2f0be807266a"). InnerVolumeSpecName "kube-api-access-rvl4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.165567 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc773320-dc83-4bb5-8f46-2f0be807266a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc773320-dc83-4bb5-8f46-2f0be807266a" (UID: "fc773320-dc83-4bb5-8f46-2f0be807266a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.165605 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-kube-api-access-vkgnv" (OuterVolumeSpecName: "kube-api-access-vkgnv") pod "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" (UID: "3b917379-75a2-4eb4-a5e0-c3afbd02b4c7"). InnerVolumeSpecName "kube-api-access-vkgnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256120 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-client-ca\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256218 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-serving-cert\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-proxy-ca-bundles\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256280 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-config\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256304 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxk6\" (UniqueName: \"kubernetes.io/projected/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-kube-api-access-rnxk6\") pod 
\"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256435 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc773320-dc83-4bb5-8f46-2f0be807266a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256449 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvl4c\" (UniqueName: \"kubernetes.io/projected/fc773320-dc83-4bb5-8f46-2f0be807266a-kube-api-access-rvl4c\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256461 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkgnv\" (UniqueName: \"kubernetes.io/projected/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-kube-api-access-vkgnv\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.256672 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.258121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-client-ca\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.258547 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-config\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " 
pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.258678 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-proxy-ca-bundles\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.260514 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-serving-cert\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.278999 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxk6\" (UniqueName: \"kubernetes.io/projected/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-kube-api-access-rnxk6\") pod \"controller-manager-67459cd59b-cbrpp\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.376292 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.612176 4932 generic.go:334] "Generic (PLEG): container finished" podID="fc773320-dc83-4bb5-8f46-2f0be807266a" containerID="67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27" exitCode=0 Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.612896 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.612898 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" event={"ID":"fc773320-dc83-4bb5-8f46-2f0be807266a","Type":"ContainerDied","Data":"67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27"} Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.612960 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c67d7b445-nsvlz" event={"ID":"fc773320-dc83-4bb5-8f46-2f0be807266a","Type":"ContainerDied","Data":"3237014b5ecc7ffb083ec7e9ede49055ae2413223c819444700570a1495ff88d"} Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.612998 4932 scope.go:117] "RemoveContainer" containerID="67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.616227 4932 generic.go:334] "Generic (PLEG): container finished" podID="f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8" containerID="a0cfb8c4fb85e256151740b57a8360856a79379bbb3b3860ba09986a061624f2" exitCode=0 Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.616331 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8","Type":"ContainerDied","Data":"a0cfb8c4fb85e256151740b57a8360856a79379bbb3b3860ba09986a061624f2"} Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.618210 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-ccb47d448-pcbfp_3b917379-75a2-4eb4-a5e0-c3afbd02b4c7/route-controller-manager/0.log" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.618244 4932 generic.go:334] "Generic (PLEG): container finished" podID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" 
containerID="3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc" exitCode=255 Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.618280 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" event={"ID":"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7","Type":"ContainerDied","Data":"3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc"} Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.618301 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" event={"ID":"3b917379-75a2-4eb4-a5e0-c3afbd02b4c7","Type":"ContainerDied","Data":"209178244c07e04f7124dc1e2ec5b7892c4c409b96a98cc97afbe533e97f47f8"} Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.618571 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.640206 4932 scope.go:117] "RemoveContainer" containerID="67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27" Mar 21 09:03:15 crc kubenswrapper[4932]: E0321 09:03:15.640980 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27\": container with ID starting with 67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27 not found: ID does not exist" containerID="67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.641074 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27"} err="failed to get container status \"67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27\": 
rpc error: code = NotFound desc = could not find container \"67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27\": container with ID starting with 67d132e128d489fe35596849df9f40be216f54aa8b73cdc3d2602eb2e0966a27 not found: ID does not exist" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.641117 4932 scope.go:117] "RemoveContainer" containerID="3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.667618 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"] Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.671009 4932 scope.go:117] "RemoveContainer" containerID="3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc" Mar 21 09:03:15 crc kubenswrapper[4932]: E0321 09:03:15.672823 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc\": container with ID starting with 3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc not found: ID does not exist" containerID="3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.672919 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc"} err="failed to get container status \"3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc\": rpc error: code = NotFound desc = could not find container \"3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc\": container with ID starting with 3245d33ed7ab96f77749492413d459c60eb4bca486cfd2433807d0c32b0f3cfc not found: ID does not exist" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.676263 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-7c67d7b445-nsvlz"] Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.679239 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"] Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.681718 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ccb47d448-pcbfp"] Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.713844 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" path="/var/lib/kubelet/pods/3b917379-75a2-4eb4-a5e0-c3afbd02b4c7/volumes" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.714606 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc773320-dc83-4bb5-8f46-2f0be807266a" path="/var/lib/kubelet/pods/fc773320-dc83-4bb5-8f46-2f0be807266a/volumes" Mar 21 09:03:15 crc kubenswrapper[4932]: I0321 09:03:15.770757 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67459cd59b-cbrpp"] Mar 21 09:03:15 crc kubenswrapper[4932]: W0321 09:03:15.774019 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d63bfb9_aef1_4316_80a6_74e5f5cab65a.slice/crio-a4e1b50c5c1d55478cc42d4c17a1b7f23610aaa1a1c3ff5519c670f9cbc1ae21 WatchSource:0}: Error finding container a4e1b50c5c1d55478cc42d4c17a1b7f23610aaa1a1c3ff5519c670f9cbc1ae21: Status 404 returned error can't find the container with id a4e1b50c5c1d55478cc42d4c17a1b7f23610aaa1a1c3ff5519c670f9cbc1ae21 Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.419106 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 21 09:03:16 crc kubenswrapper[4932]: E0321 09:03:16.419813 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" containerName="route-controller-manager" Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.419829 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" containerName="route-controller-manager" Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.419973 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b917379-75a2-4eb4-a5e0-c3afbd02b4c7" containerName="route-controller-manager" Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.420538 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.439322 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.576013 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kubelet-dir\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.576233 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kube-api-access\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.576334 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-var-lock\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " 
pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.627691 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" event={"ID":"0d63bfb9-aef1-4316-80a6-74e5f5cab65a","Type":"ContainerStarted","Data":"3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1"}
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.627761 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" event={"ID":"0d63bfb9-aef1-4316-80a6-74e5f5cab65a","Type":"ContainerStarted","Data":"a4e1b50c5c1d55478cc42d4c17a1b7f23610aaa1a1c3ff5519c670f9cbc1ae21"}
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.627989 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.635666 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.649339 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" podStartSLOduration=5.649311298 podStartE2EDuration="5.649311298s" podCreationTimestamp="2026-03-21 09:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:16.647900424 +0000 UTC m=+300.243098713" watchObservedRunningTime="2026-03-21 09:03:16.649311298 +0000 UTC m=+300.244509567"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.677586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kubelet-dir\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.677701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kube-api-access\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.677757 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-var-lock\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.677749 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kubelet-dir\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.677942 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-var-lock\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.719656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kube-api-access\") pod \"installer-9-crc\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:16 crc kubenswrapper[4932]: I0321 09:03:16.743299 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.023693 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.600917 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"]
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.604822 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.609969 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.610094 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.610336 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.610400 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.610526 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.610632 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.619502 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"]
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.642373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8","Type":"ContainerDied","Data":"81b22c5b6800e6e36847758b45bcf5defeebd79b952c691851fc3aa6a7d82f6b"}
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.642436 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b22c5b6800e6e36847758b45bcf5defeebd79b952c691851fc3aa6a7d82f6b"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.646884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"49804aeb-dfa6-4f73-82bf-4d8d29792d18","Type":"ContainerStarted","Data":"69898ebacfe9b8f20ad954ad79609ab59a6183b31d682ec435cbf36f0ac1ecdb"}
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.691439 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8st\" (UniqueName: \"kubernetes.io/projected/d48be587-8361-4422-a8eb-0d713cedcae5-kube-api-access-vx8st\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.692024 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-config\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.692078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-client-ca\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.692124 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48be587-8361-4422-a8eb-0d713cedcae5-serving-cert\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.698310 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.793481 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kubelet-dir\") pod \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") "
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.793553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kube-api-access\") pod \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\" (UID: \"f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8\") "
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.793768 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8st\" (UniqueName: \"kubernetes.io/projected/d48be587-8361-4422-a8eb-0d713cedcae5-kube-api-access-vx8st\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.793849 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-config\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.794129 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8" (UID: "f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.793883 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-client-ca\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.795097 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48be587-8361-4422-a8eb-0d713cedcae5-serving-cert\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.795228 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.796260 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.799248 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.799429 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.803801 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8" (UID: "f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.805643 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-config\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.806618 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-client-ca\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.808661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48be587-8361-4422-a8eb-0d713cedcae5-serving-cert\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.810405 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.820708 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.835476 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8st\" (UniqueName: \"kubernetes.io/projected/d48be587-8361-4422-a8eb-0d713cedcae5-kube-api-access-vx8st\") pod \"route-controller-manager-5c5f98fb9b-djtmh\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:17 crc kubenswrapper[4932]: I0321 09:03:17.896431 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:18 crc kubenswrapper[4932]: I0321 09:03:18.063199 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 21 09:03:18 crc kubenswrapper[4932]: I0321 09:03:18.067808 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:18 crc kubenswrapper[4932]: I0321 09:03:18.668286 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerStarted","Data":"b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e"}
Mar 21 09:03:18 crc kubenswrapper[4932]: I0321 09:03:18.668368 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 21 09:03:18 crc kubenswrapper[4932]: I0321 09:03:18.691804 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m89fk" podStartSLOduration=5.048696625 podStartE2EDuration="43.691781802s" podCreationTimestamp="2026-03-21 09:02:35 +0000 UTC" firstStartedPulling="2026-03-21 09:02:38.076878034 +0000 UTC m=+261.672076293" lastFinishedPulling="2026-03-21 09:03:16.719963201 +0000 UTC m=+300.315161470" observedRunningTime="2026-03-21 09:03:18.688435857 +0000 UTC m=+302.283634126" watchObservedRunningTime="2026-03-21 09:03:18.691781802 +0000 UTC m=+302.286980061"
Mar 21 09:03:19 crc kubenswrapper[4932]: I0321 09:03:19.621127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"]
Mar 21 09:03:19 crc kubenswrapper[4932]: W0321 09:03:19.637698 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd48be587_8361_4422_a8eb_0d713cedcae5.slice/crio-27764ce04a18cfad89e2a96a5c4f2429054b1cd31754e7023ddff1aa6a4c766c WatchSource:0}: Error finding container 27764ce04a18cfad89e2a96a5c4f2429054b1cd31754e7023ddff1aa6a4c766c: Status 404 returned error can't find the container with id 27764ce04a18cfad89e2a96a5c4f2429054b1cd31754e7023ddff1aa6a4c766c
Mar 21 09:03:19 crc kubenswrapper[4932]: I0321 09:03:19.676938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerStarted","Data":"f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458"}
Mar 21 09:03:19 crc kubenswrapper[4932]: I0321 09:03:19.678526 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" event={"ID":"d48be587-8361-4422-a8eb-0d713cedcae5","Type":"ContainerStarted","Data":"27764ce04a18cfad89e2a96a5c4f2429054b1cd31754e7023ddff1aa6a4c766c"}
Mar 21 09:03:19 crc kubenswrapper[4932]: I0321 09:03:19.682224 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2kwc" event={"ID":"29534428-e319-412a-a850-53b180783073","Type":"ContainerStarted","Data":"673c2468511686b34a4cad7e9bbcd4ed5eab37d1964319c6f34ffbe3d8ff8847"}
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.687588 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" event={"ID":"d48be587-8361-4422-a8eb-0d713cedcae5","Type":"ContainerStarted","Data":"6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778"}
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.688186 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.688863 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"49804aeb-dfa6-4f73-82bf-4d8d29792d18","Type":"ContainerStarted","Data":"807084e677df5fd22fc5058976904f4bf732811886af0839816e150a04769f2e"}
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.710261 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" podStartSLOduration=8.71023958 podStartE2EDuration="8.71023958s" podCreationTimestamp="2026-03-21 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:20.708407761 +0000 UTC m=+304.303606050" watchObservedRunningTime="2026-03-21 09:03:20.71023958 +0000 UTC m=+304.305437849"
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.711477 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.730224 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2kwc" podStartSLOduration=4.632084641 podStartE2EDuration="46.730198668s" podCreationTimestamp="2026-03-21 09:02:34 +0000 UTC" firstStartedPulling="2026-03-21 09:02:37.076930897 +0000 UTC m=+260.672129166" lastFinishedPulling="2026-03-21 09:03:19.175044924 +0000 UTC m=+302.770243193" observedRunningTime="2026-03-21 09:03:20.727609216 +0000 UTC m=+304.322807505" watchObservedRunningTime="2026-03-21 09:03:20.730198668 +0000 UTC m=+304.325396937"
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.752313 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.752285913 podStartE2EDuration="4.752285913s" podCreationTimestamp="2026-03-21 09:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:20.746893903 +0000 UTC m=+304.342092172" watchObservedRunningTime="2026-03-21 09:03:20.752285913 +0000 UTC m=+304.347484182"
Mar 21 09:03:20 crc kubenswrapper[4932]: I0321 09:03:20.767694 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6ctps" podStartSLOduration=4.331674231 podStartE2EDuration="47.767674396s" podCreationTimestamp="2026-03-21 09:02:33 +0000 UTC" firstStartedPulling="2026-03-21 09:02:34.897490476 +0000 UTC m=+258.492688745" lastFinishedPulling="2026-03-21 09:03:18.333490641 +0000 UTC m=+301.928688910" observedRunningTime="2026-03-21 09:03:20.76716218 +0000 UTC m=+304.362360469" watchObservedRunningTime="2026-03-21 09:03:20.767674396 +0000 UTC m=+304.362872665"
Mar 21 09:03:21 crc kubenswrapper[4932]: I0321 09:03:21.699333 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbmnj" event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerStarted","Data":"c22a40d8bb3fce3526514843e6c1b1f761ab4d70c763e9fe7ccd1de5b8126e23"}
Mar 21 09:03:21 crc kubenswrapper[4932]: I0321 09:03:21.731752 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jbmnj" podStartSLOduration=3.717972187 podStartE2EDuration="49.731727454s" podCreationTimestamp="2026-03-21 09:02:32 +0000 UTC" firstStartedPulling="2026-03-21 09:02:34.837276551 +0000 UTC m=+258.432474820" lastFinishedPulling="2026-03-21 09:03:20.851031818 +0000 UTC m=+304.446230087" observedRunningTime="2026-03-21 09:03:21.726959404 +0000 UTC m=+305.322157683" watchObservedRunningTime="2026-03-21 09:03:21.731727454 +0000 UTC m=+305.326925723"
Mar 21 09:03:23 crc kubenswrapper[4932]: I0321 09:03:23.300404 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jbmnj"
Mar 21 09:03:23 crc kubenswrapper[4932]: I0321 09:03:23.300478 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jbmnj"
Mar 21 09:03:23 crc kubenswrapper[4932]: I0321 09:03:23.720862 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:03:23 crc kubenswrapper[4932]: I0321 09:03:23.721850 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:03:24 crc kubenswrapper[4932]: I0321 09:03:24.039142 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:03:24 crc kubenswrapper[4932]: I0321 09:03:24.042094 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jbmnj"
Mar 21 09:03:24 crc kubenswrapper[4932]: I0321 09:03:24.763830 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.315802 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2kwc"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.316307 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m2kwc"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.362797 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2kwc"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.503923 4932 csr.go:261] certificate signing request csr-mcg7b is approved, waiting to be issued
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.511770 4932 csr.go:257] certificate signing request csr-mcg7b is issued
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.727667 4932 generic.go:334] "Generic (PLEG): container finished" podID="026bb1a2-7881-45a8-8845-53d8bbcb4166" containerID="c2410365de3b696ccaeb9d3701cf7e1ba78e42c30f98637364db715639fea248" exitCode=0
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.727783 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" event={"ID":"026bb1a2-7881-45a8-8845-53d8bbcb4166","Type":"ContainerDied","Data":"c2410365de3b696ccaeb9d3701cf7e1ba78e42c30f98637364db715639fea248"}
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.735114 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerStarted","Data":"d5554802cc70cd1d738206d177b137e45c78ce10f6b16fd1e2c2a8ca1bff583a"}
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.739514 4932 generic.go:334] "Generic (PLEG): container finished" podID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a" containerID="07f4ac30d27cc766ff13280bf7292f6927e9feb5c63be51a683059ef86f34807" exitCode=0
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.739610 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568060-6lptj" event={"ID":"64d9430a-2f41-4dac-bfa7-9fa47a85db9a","Type":"ContainerDied","Data":"07f4ac30d27cc766ff13280bf7292f6927e9feb5c63be51a683059ef86f34807"}
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.770844 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m89fk"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.770905 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m89fk"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.797902 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2kwc"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.835292 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m89fk"
Mar 21 09:03:25 crc kubenswrapper[4932]: I0321 09:03:25.936446 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ctps"]
Mar 21 09:03:26 crc kubenswrapper[4932]: I0321 09:03:26.513725 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-20 01:56:02.221030299 +0000 UTC
Mar 21 09:03:26 crc kubenswrapper[4932]: I0321 09:03:26.513794 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7312h52m35.707240928s for next certificate rotation
Mar 21 09:03:26 crc kubenswrapper[4932]: I0321 09:03:26.752751 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerStarted","Data":"1d370b01cd23a8438f4ac75a72fcfbe94baeb2c978e5ef446610aa861a5d6273"}
Mar 21 09:03:26 crc kubenswrapper[4932]: I0321 09:03:26.755515 4932 generic.go:334] "Generic (PLEG): container finished" podID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerID="d5554802cc70cd1d738206d177b137e45c78ce10f6b16fd1e2c2a8ca1bff583a" exitCode=0
Mar 21 09:03:26 crc kubenswrapper[4932]: I0321 09:03:26.755659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerDied","Data":"d5554802cc70cd1d738206d177b137e45c78ce10f6b16fd1e2c2a8ca1bff583a"}
Mar 21 09:03:26 crc kubenswrapper[4932]: I0321 09:03:26.818289 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m89fk"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.192921 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568060-6lptj"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.197399 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568062-ckbkj"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.356752 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh9v9\" (UniqueName: \"kubernetes.io/projected/026bb1a2-7881-45a8-8845-53d8bbcb4166-kube-api-access-vh9v9\") pod \"026bb1a2-7881-45a8-8845-53d8bbcb4166\" (UID: \"026bb1a2-7881-45a8-8845-53d8bbcb4166\") "
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.356942 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc6tg\" (UniqueName: \"kubernetes.io/projected/64d9430a-2f41-4dac-bfa7-9fa47a85db9a-kube-api-access-jc6tg\") pod \"64d9430a-2f41-4dac-bfa7-9fa47a85db9a\" (UID: \"64d9430a-2f41-4dac-bfa7-9fa47a85db9a\") "
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.363020 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026bb1a2-7881-45a8-8845-53d8bbcb4166-kube-api-access-vh9v9" (OuterVolumeSpecName: "kube-api-access-vh9v9") pod "026bb1a2-7881-45a8-8845-53d8bbcb4166" (UID: "026bb1a2-7881-45a8-8845-53d8bbcb4166"). InnerVolumeSpecName "kube-api-access-vh9v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.363213 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d9430a-2f41-4dac-bfa7-9fa47a85db9a-kube-api-access-jc6tg" (OuterVolumeSpecName: "kube-api-access-jc6tg") pod "64d9430a-2f41-4dac-bfa7-9fa47a85db9a" (UID: "64d9430a-2f41-4dac-bfa7-9fa47a85db9a"). InnerVolumeSpecName "kube-api-access-jc6tg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.458558 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc6tg\" (UniqueName: \"kubernetes.io/projected/64d9430a-2f41-4dac-bfa7-9fa47a85db9a-kube-api-access-jc6tg\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.458612 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh9v9\" (UniqueName: \"kubernetes.io/projected/026bb1a2-7881-45a8-8845-53d8bbcb4166-kube-api-access-vh9v9\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.514199 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-06 23:39:13.187074087 +0000 UTC
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.514252 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6254h35m45.672825743s for next certificate rotation
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.736924 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m89fk"]
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.763303 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerStarted","Data":"95e1b1e8ce0b1b08b5e0cceb6e864d34bca566da63baf46f9969ce8663e67bec"}
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.768040 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568060-6lptj" event={"ID":"64d9430a-2f41-4dac-bfa7-9fa47a85db9a","Type":"ContainerDied","Data":"a83d27cb9248b45acb70811840e27868f76205f4e1db393772b15f93ff7e59dd"}
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.768077 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83d27cb9248b45acb70811840e27868f76205f4e1db393772b15f93ff7e59dd"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.768083 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568060-6lptj"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.771085 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568062-ckbkj" event={"ID":"026bb1a2-7881-45a8-8845-53d8bbcb4166","Type":"ContainerDied","Data":"489cae96f1c983e4c7d97827ecbf5c2e538304608026e5eb1f12aaa484037b4f"}
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.771142 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="489cae96f1c983e4c7d97827ecbf5c2e538304608026e5eb1f12aaa484037b4f"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.771232 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568062-ckbkj"
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.777101 4932 generic.go:334] "Generic (PLEG): container finished" podID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerID="1d370b01cd23a8438f4ac75a72fcfbe94baeb2c978e5ef446610aa861a5d6273" exitCode=0
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.777184 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerDied","Data":"1d370b01cd23a8438f4ac75a72fcfbe94baeb2c978e5ef446610aa861a5d6273"}
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.777551 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6ctps" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="registry-server" containerID="cri-o://f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458" gracePeriod=2
Mar 21 09:03:27 crc kubenswrapper[4932]: I0321 09:03:27.783025 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ttjm" podStartSLOduration=3.6419456 podStartE2EDuration="52.783003008s" podCreationTimestamp="2026-03-21 09:02:35 +0000 UTC" firstStartedPulling="2026-03-21 09:02:38.085667301 +0000 UTC m=+261.680865570" lastFinishedPulling="2026-03-21 09:03:27.226724709 +0000 UTC m=+310.821922978" observedRunningTime="2026-03-21 09:03:27.780634944 +0000 UTC m=+311.375833233" watchObservedRunningTime="2026-03-21 09:03:27.783003008 +0000 UTC m=+311.378201277"
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.275127 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ctps"
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.371961 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-catalog-content\") pod \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") "
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.372117 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4lk\" (UniqueName: \"kubernetes.io/projected/2b21532b-ba09-4bcb-b030-eaad36e4ba20-kube-api-access-8n4lk\") pod \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") "
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.372209 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-utilities\") pod \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\" (UID: \"2b21532b-ba09-4bcb-b030-eaad36e4ba20\") "
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.373577 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-utilities" (OuterVolumeSpecName: "utilities") pod "2b21532b-ba09-4bcb-b030-eaad36e4ba20" (UID: "2b21532b-ba09-4bcb-b030-eaad36e4ba20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.381879 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b21532b-ba09-4bcb-b030-eaad36e4ba20-kube-api-access-8n4lk" (OuterVolumeSpecName: "kube-api-access-8n4lk") pod "2b21532b-ba09-4bcb-b030-eaad36e4ba20" (UID: "2b21532b-ba09-4bcb-b030-eaad36e4ba20"). InnerVolumeSpecName "kube-api-access-8n4lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.438476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b21532b-ba09-4bcb-b030-eaad36e4ba20" (UID: "2b21532b-ba09-4bcb-b030-eaad36e4ba20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.474759 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.474798 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4lk\" (UniqueName: \"kubernetes.io/projected/2b21532b-ba09-4bcb-b030-eaad36e4ba20-kube-api-access-8n4lk\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.474814 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b21532b-ba09-4bcb-b030-eaad36e4ba20-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.785632 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerID="f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458" exitCode=0
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.786188 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerDied","Data":"f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458"}
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.786235 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctps" event={"ID":"2b21532b-ba09-4bcb-b030-eaad36e4ba20","Type":"ContainerDied","Data":"93627ee69d9d2cd39f33d73acaf0ad7b9dac0f3482b178d0fe2d397e01e89381"}
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.786258 4932 scope.go:117] "RemoveContainer" containerID="f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458"
Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 
09:03:28.786412 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ctps" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.789163 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerStarted","Data":"9704b304a58206773164f472b8531c6200388b63e82ab1b3509901f28aaeb999"} Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.792043 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerStarted","Data":"4703828ea9eb250cf831aae8b8bb0709984d989d42d2dd952502e4a8dd50a036"} Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.792219 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m89fk" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="registry-server" containerID="cri-o://b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e" gracePeriod=2 Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.814243 4932 scope.go:117] "RemoveContainer" containerID="dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.837863 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8s775" podStartSLOduration=2.592516137 podStartE2EDuration="52.837844073s" podCreationTimestamp="2026-03-21 09:02:36 +0000 UTC" firstStartedPulling="2026-03-21 09:02:38.083579956 +0000 UTC m=+261.678778225" lastFinishedPulling="2026-03-21 09:03:28.328907892 +0000 UTC m=+311.924106161" observedRunningTime="2026-03-21 09:03:28.835589591 +0000 UTC m=+312.430787860" watchObservedRunningTime="2026-03-21 09:03:28.837844073 +0000 UTC m=+312.433042332" Mar 21 09:03:28 crc 
kubenswrapper[4932]: I0321 09:03:28.841185 4932 scope.go:117] "RemoveContainer" containerID="dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.863134 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ctps"] Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.863316 4932 scope.go:117] "RemoveContainer" containerID="f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458" Mar 21 09:03:28 crc kubenswrapper[4932]: E0321 09:03:28.864027 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458\": container with ID starting with f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458 not found: ID does not exist" containerID="f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.864070 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458"} err="failed to get container status \"f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458\": rpc error: code = NotFound desc = could not find container \"f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458\": container with ID starting with f9911079ccb698414d33ca1fae65c963462a03453886c2854dd548aeaef0c458 not found: ID does not exist" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.864396 4932 scope.go:117] "RemoveContainer" containerID="dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305" Mar 21 09:03:28 crc kubenswrapper[4932]: E0321 09:03:28.864855 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305\": container with ID starting with dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305 not found: ID does not exist" containerID="dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.864961 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305"} err="failed to get container status \"dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305\": rpc error: code = NotFound desc = could not find container \"dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305\": container with ID starting with dd831e53590affc7682eaa4a606b1f75bb71ad9ea19eca4a73dcee5bbd853305 not found: ID does not exist" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.865023 4932 scope.go:117] "RemoveContainer" containerID="dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764" Mar 21 09:03:28 crc kubenswrapper[4932]: E0321 09:03:28.866216 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764\": container with ID starting with dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764 not found: ID does not exist" containerID="dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.866363 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764"} err="failed to get container status \"dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764\": rpc error: code = NotFound desc = could not find container \"dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764\": container with ID 
starting with dca00fbf27ee95fb028e88a477e0d1856249abc3fced9f296b784da96a094764 not found: ID does not exist" Mar 21 09:03:28 crc kubenswrapper[4932]: I0321 09:03:28.869332 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6ctps"] Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.271692 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.388842 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-utilities\") pod \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.389257 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-catalog-content\") pod \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.389303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtxnh\" (UniqueName: \"kubernetes.io/projected/3bdb8dcf-07f0-4961-a04a-01133cbe4788-kube-api-access-gtxnh\") pod \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\" (UID: \"3bdb8dcf-07f0-4961-a04a-01133cbe4788\") " Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.389779 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-utilities" (OuterVolumeSpecName: "utilities") pod "3bdb8dcf-07f0-4961-a04a-01133cbe4788" (UID: "3bdb8dcf-07f0-4961-a04a-01133cbe4788"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.395411 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdb8dcf-07f0-4961-a04a-01133cbe4788-kube-api-access-gtxnh" (OuterVolumeSpecName: "kube-api-access-gtxnh") pod "3bdb8dcf-07f0-4961-a04a-01133cbe4788" (UID: "3bdb8dcf-07f0-4961-a04a-01133cbe4788"). InnerVolumeSpecName "kube-api-access-gtxnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.417436 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bdb8dcf-07f0-4961-a04a-01133cbe4788" (UID: "3bdb8dcf-07f0-4961-a04a-01133cbe4788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.490623 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.490669 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bdb8dcf-07f0-4961-a04a-01133cbe4788-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.490680 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtxnh\" (UniqueName: \"kubernetes.io/projected/3bdb8dcf-07f0-4961-a04a-01133cbe4788-kube-api-access-gtxnh\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.712736 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" 
path="/var/lib/kubelet/pods/2b21532b-ba09-4bcb-b030-eaad36e4ba20/volumes" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.803751 4932 generic.go:334] "Generic (PLEG): container finished" podID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerID="9704b304a58206773164f472b8531c6200388b63e82ab1b3509901f28aaeb999" exitCode=0 Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.803851 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerDied","Data":"9704b304a58206773164f472b8531c6200388b63e82ab1b3509901f28aaeb999"} Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.809095 4932 generic.go:334] "Generic (PLEG): container finished" podID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerID="b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e" exitCode=0 Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.809249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerDied","Data":"b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e"} Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.809414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m89fk" event={"ID":"3bdb8dcf-07f0-4961-a04a-01133cbe4788","Type":"ContainerDied","Data":"459b4a38f0ea48366d52cbde91ae38b3d5f6bfa5ee7d46611b4210c3edc77ce3"} Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.809275 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m89fk" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.809463 4932 scope.go:117] "RemoveContainer" containerID="b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.823099 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerStarted","Data":"15e4f24b8147e3fb481eba931c9751a19b0f8f712f7646428ccb4680811d195b"} Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.838383 4932 scope.go:117] "RemoveContainer" containerID="6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.846527 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m89fk"] Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.851980 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m89fk"] Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.908073 4932 scope.go:117] "RemoveContainer" containerID="2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.927167 4932 scope.go:117] "RemoveContainer" containerID="b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e" Mar 21 09:03:29 crc kubenswrapper[4932]: E0321 09:03:29.928464 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e\": container with ID starting with b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e not found: ID does not exist" containerID="b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.928517 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e"} err="failed to get container status \"b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e\": rpc error: code = NotFound desc = could not find container \"b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e\": container with ID starting with b698ed7f7f8703dc27d02abe98bbc7e34cc6c2987dacb126b503f2e8b01dd47e not found: ID does not exist" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.928552 4932 scope.go:117] "RemoveContainer" containerID="6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8" Mar 21 09:03:29 crc kubenswrapper[4932]: E0321 09:03:29.928989 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8\": container with ID starting with 6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8 not found: ID does not exist" containerID="6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.929074 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8"} err="failed to get container status \"6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8\": rpc error: code = NotFound desc = could not find container \"6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8\": container with ID starting with 6b5346ec24881b082bd87a7d384f9db1a236c0e90ec68975d1502cbf06a41ee8 not found: ID does not exist" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.929144 4932 scope.go:117] "RemoveContainer" containerID="2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc" Mar 21 09:03:29 crc kubenswrapper[4932]: E0321 
09:03:29.929482 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc\": container with ID starting with 2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc not found: ID does not exist" containerID="2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc" Mar 21 09:03:29 crc kubenswrapper[4932]: I0321 09:03:29.929579 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc"} err="failed to get container status \"2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc\": rpc error: code = NotFound desc = could not find container \"2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc\": container with ID starting with 2f91d878d77f45524b7c88a3509795c39784634835ec70cb7fd32272b93188dc not found: ID does not exist" Mar 21 09:03:30 crc kubenswrapper[4932]: I0321 09:03:30.225727 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:03:30 crc kubenswrapper[4932]: I0321 09:03:30.225818 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:03:30 crc kubenswrapper[4932]: I0321 09:03:30.225874 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:03:30 crc 
kubenswrapper[4932]: I0321 09:03:30.227054 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:03:30 crc kubenswrapper[4932]: I0321 09:03:30.227216 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb" gracePeriod=600 Mar 21 09:03:30 crc kubenswrapper[4932]: I0321 09:03:30.833149 4932 generic.go:334] "Generic (PLEG): container finished" podID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerID="15e4f24b8147e3fb481eba931c9751a19b0f8f712f7646428ccb4680811d195b" exitCode=0 Mar 21 09:03:30 crc kubenswrapper[4932]: I0321 09:03:30.833215 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerDied","Data":"15e4f24b8147e3fb481eba931c9751a19b0f8f712f7646428ccb4680811d195b"} Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.708966 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" path="/var/lib/kubelet/pods/3bdb8dcf-07f0-4961-a04a-01133cbe4788/volumes" Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.845465 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb" exitCode=0 Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.845572 4932 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb"} Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.845715 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"d6d1a99812a9df2e3a0aed95e8032e795b8e981cbae4b19e570cfe3a8c155d8a"} Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.849575 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerStarted","Data":"9b253d0c2573dd28f7279c66daa519534de24d3614e3b41c9dab8b4b58917f71"} Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.851849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerStarted","Data":"39c28885bb22b8dfebd11aed00444775dc987bf2d69cf3e1bb79a180c3b21483"} Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.898437 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gx64c" podStartSLOduration=3.68342656 podStartE2EDuration="59.898401163s" podCreationTimestamp="2026-03-21 09:02:32 +0000 UTC" firstStartedPulling="2026-03-21 09:02:34.843414454 +0000 UTC m=+258.438612723" lastFinishedPulling="2026-03-21 09:03:31.058389057 +0000 UTC m=+314.653587326" observedRunningTime="2026-03-21 09:03:31.896261536 +0000 UTC m=+315.491459815" watchObservedRunningTime="2026-03-21 09:03:31.898401163 +0000 UTC m=+315.493599462" Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.952046 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-n7ztm" podStartSLOduration=3.640912381 podStartE2EDuration="58.95202404s" podCreationTimestamp="2026-03-21 09:02:33 +0000 UTC" firstStartedPulling="2026-03-21 09:02:36.026791102 +0000 UTC m=+259.621989381" lastFinishedPulling="2026-03-21 09:03:31.337902761 +0000 UTC m=+314.933101040" observedRunningTime="2026-03-21 09:03:31.951028629 +0000 UTC m=+315.546226918" watchObservedRunningTime="2026-03-21 09:03:31.95202404 +0000 UTC m=+315.547222309" Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.993622 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67459cd59b-cbrpp"] Mar 21 09:03:31 crc kubenswrapper[4932]: I0321 09:03:31.993883 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" podUID="0d63bfb9-aef1-4316-80a6-74e5f5cab65a" containerName="controller-manager" containerID="cri-o://3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1" gracePeriod=30 Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.034195 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"] Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.034868 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" podUID="d48be587-8361-4422-a8eb-0d713cedcae5" containerName="route-controller-manager" containerID="cri-o://6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778" gracePeriod=30 Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.562534 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.595187 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.647477 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-client-ca\") pod \"d48be587-8361-4422-a8eb-0d713cedcae5\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.647555 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-config\") pod \"d48be587-8361-4422-a8eb-0d713cedcae5\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.647593 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48be587-8361-4422-a8eb-0d713cedcae5-serving-cert\") pod \"d48be587-8361-4422-a8eb-0d713cedcae5\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.647698 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx8st\" (UniqueName: \"kubernetes.io/projected/d48be587-8361-4422-a8eb-0d713cedcae5-kube-api-access-vx8st\") pod \"d48be587-8361-4422-a8eb-0d713cedcae5\" (UID: \"d48be587-8361-4422-a8eb-0d713cedcae5\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.649273 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"d48be587-8361-4422-a8eb-0d713cedcae5" (UID: "d48be587-8361-4422-a8eb-0d713cedcae5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.650044 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-config" (OuterVolumeSpecName: "config") pod "d48be587-8361-4422-a8eb-0d713cedcae5" (UID: "d48be587-8361-4422-a8eb-0d713cedcae5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.655702 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48be587-8361-4422-a8eb-0d713cedcae5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d48be587-8361-4422-a8eb-0d713cedcae5" (UID: "d48be587-8361-4422-a8eb-0d713cedcae5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.656230 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48be587-8361-4422-a8eb-0d713cedcae5-kube-api-access-vx8st" (OuterVolumeSpecName: "kube-api-access-vx8st") pod "d48be587-8361-4422-a8eb-0d713cedcae5" (UID: "d48be587-8361-4422-a8eb-0d713cedcae5"). InnerVolumeSpecName "kube-api-access-vx8st". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749053 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-client-ca\") pod \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749145 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-proxy-ca-bundles\") pod \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749206 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-serving-cert\") pod \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749296 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-config\") pod \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749374 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxk6\" (UniqueName: \"kubernetes.io/projected/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-kube-api-access-rnxk6\") pod \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\" (UID: \"0d63bfb9-aef1-4316-80a6-74e5f5cab65a\") " Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749779 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d48be587-8361-4422-a8eb-0d713cedcae5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749807 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx8st\" (UniqueName: \"kubernetes.io/projected/d48be587-8361-4422-a8eb-0d713cedcae5-kube-api-access-vx8st\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749823 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.749837 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48be587-8361-4422-a8eb-0d713cedcae5-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.750420 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d63bfb9-aef1-4316-80a6-74e5f5cab65a" (UID: "0d63bfb9-aef1-4316-80a6-74e5f5cab65a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.750561 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-config" (OuterVolumeSpecName: "config") pod "0d63bfb9-aef1-4316-80a6-74e5f5cab65a" (UID: "0d63bfb9-aef1-4316-80a6-74e5f5cab65a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.750975 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d63bfb9-aef1-4316-80a6-74e5f5cab65a" (UID: "0d63bfb9-aef1-4316-80a6-74e5f5cab65a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.753723 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d63bfb9-aef1-4316-80a6-74e5f5cab65a" (UID: "0d63bfb9-aef1-4316-80a6-74e5f5cab65a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.753710 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-kube-api-access-rnxk6" (OuterVolumeSpecName: "kube-api-access-rnxk6") pod "0d63bfb9-aef1-4316-80a6-74e5f5cab65a" (UID: "0d63bfb9-aef1-4316-80a6-74e5f5cab65a"). InnerVolumeSpecName "kube-api-access-rnxk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.851131 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.851181 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.851194 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.851205 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.851215 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxk6\" (UniqueName: \"kubernetes.io/projected/0d63bfb9-aef1-4316-80a6-74e5f5cab65a-kube-api-access-rnxk6\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.859023 4932 generic.go:334] "Generic (PLEG): container finished" podID="d48be587-8361-4422-a8eb-0d713cedcae5" containerID="6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778" exitCode=0 Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.859117 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" event={"ID":"d48be587-8361-4422-a8eb-0d713cedcae5","Type":"ContainerDied","Data":"6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778"} Mar 21 
09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.859212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" event={"ID":"d48be587-8361-4422-a8eb-0d713cedcae5","Type":"ContainerDied","Data":"27764ce04a18cfad89e2a96a5c4f2429054b1cd31754e7023ddff1aa6a4c766c"} Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.859247 4932 scope.go:117] "RemoveContainer" containerID="6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.859359 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.862324 4932 generic.go:334] "Generic (PLEG): container finished" podID="0d63bfb9-aef1-4316-80a6-74e5f5cab65a" containerID="3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1" exitCode=0 Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.862372 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" event={"ID":"0d63bfb9-aef1-4316-80a6-74e5f5cab65a","Type":"ContainerDied","Data":"3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1"} Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.862387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" event={"ID":"0d63bfb9-aef1-4316-80a6-74e5f5cab65a","Type":"ContainerDied","Data":"a4e1b50c5c1d55478cc42d4c17a1b7f23610aaa1a1c3ff5519c670f9cbc1ae21"} Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.862441 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67459cd59b-cbrpp" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.878996 4932 scope.go:117] "RemoveContainer" containerID="6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778" Mar 21 09:03:32 crc kubenswrapper[4932]: E0321 09:03:32.880368 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778\": container with ID starting with 6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778 not found: ID does not exist" containerID="6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.880483 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778"} err="failed to get container status \"6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778\": rpc error: code = NotFound desc = could not find container \"6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778\": container with ID starting with 6202c27eaed08fabff6e00e6e6113e3515f292b19746328c686103229b0b3778 not found: ID does not exist" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.880580 4932 scope.go:117] "RemoveContainer" containerID="3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.894459 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"] Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.897140 4932 scope.go:117] "RemoveContainer" containerID="3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1" Mar 21 09:03:32 crc kubenswrapper[4932]: E0321 09:03:32.897855 4932 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1\": container with ID starting with 3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1 not found: ID does not exist" containerID="3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.897950 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1"} err="failed to get container status \"3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1\": rpc error: code = NotFound desc = could not find container \"3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1\": container with ID starting with 3a122fc648e038ef8cf359f1d6911f9c43f84c6685e92d613c601bf51146dca1 not found: ID does not exist" Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.906053 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c5f98fb9b-djtmh"] Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.917567 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67459cd59b-cbrpp"] Mar 21 09:03:32 crc kubenswrapper[4932]: I0321 09:03:32.921182 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67459cd59b-cbrpp"] Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.119669 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.119799 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.354428 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.612625 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2"] Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.612946 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="registry-server" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.612968 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="registry-server" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613016 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="extract-utilities" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613026 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="extract-utilities" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613038 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d63bfb9-aef1-4316-80a6-74e5f5cab65a" containerName="controller-manager" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613047 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d63bfb9-aef1-4316-80a6-74e5f5cab65a" containerName="controller-manager" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613061 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="registry-server" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613069 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="registry-server" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613082 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="extract-content" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613092 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="extract-content" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613104 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="extract-content" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613112 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="extract-content" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613125 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a" containerName="oc" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613133 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a" containerName="oc" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613149 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026bb1a2-7881-45a8-8845-53d8bbcb4166" containerName="oc" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613158 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="026bb1a2-7881-45a8-8845-53d8bbcb4166" containerName="oc" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613172 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48be587-8361-4422-a8eb-0d713cedcae5" containerName="route-controller-manager" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613180 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48be587-8361-4422-a8eb-0d713cedcae5" containerName="route-controller-manager" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613192 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="extract-utilities" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613201 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="extract-utilities" Mar 21 09:03:33 crc kubenswrapper[4932]: E0321 09:03:33.613213 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8" containerName="pruner" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613221 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8" containerName="pruner" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613404 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d63bfb9-aef1-4316-80a6-74e5f5cab65a" containerName="controller-manager" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613426 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53f7a8f-79d4-4aab-b0bc-ba195a2f9ac8" containerName="pruner" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613445 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdb8dcf-07f0-4961-a04a-01133cbe4788" containerName="registry-server" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613457 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b21532b-ba09-4bcb-b030-eaad36e4ba20" containerName="registry-server" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613468 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="026bb1a2-7881-45a8-8845-53d8bbcb4166" containerName="oc" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613477 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a" containerName="oc" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613492 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d48be587-8361-4422-a8eb-0d713cedcae5" containerName="route-controller-manager" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.613974 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.614984 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h"] Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.615702 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.617198 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.617218 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.617265 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.626582 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.626866 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.627524 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.633422 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 
21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.633503 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.633564 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.633677 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.633569 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.633977 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.634120 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.634332 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.645625 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.659319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2"] Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.688695 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.690286 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h"] Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.721774 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d63bfb9-aef1-4316-80a6-74e5f5cab65a" path="/var/lib/kubelet/pods/0d63bfb9-aef1-4316-80a6-74e5f5cab65a/volumes" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.722701 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48be587-8361-4422-a8eb-0d713cedcae5" path="/var/lib/kubelet/pods/d48be587-8361-4422-a8eb-0d713cedcae5/volumes" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763122 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-config\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763191 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbw7k\" (UniqueName: \"kubernetes.io/projected/93f23f97-ccef-4a47-98b2-95dd31910260-kube-api-access-hbw7k\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-client-ca\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc 
kubenswrapper[4932]: I0321 09:03:33.763463 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f23f97-ccef-4a47-98b2-95dd31910260-serving-cert\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763486 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-proxy-ca-bundles\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763524 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-config\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7xt\" (UniqueName: \"kubernetes.io/projected/5e63f470-9fba-4c98-971f-266ecb0d1ab1-kube-api-access-gv7xt\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763598 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5e63f470-9fba-4c98-971f-266ecb0d1ab1-serving-cert\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.763632 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-client-ca\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.864875 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-client-ca\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.864985 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f23f97-ccef-4a47-98b2-95dd31910260-serving-cert\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.865004 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-proxy-ca-bundles\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc 
kubenswrapper[4932]: I0321 09:03:33.865041 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-config\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.865072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7xt\" (UniqueName: \"kubernetes.io/projected/5e63f470-9fba-4c98-971f-266ecb0d1ab1-kube-api-access-gv7xt\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.865094 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e63f470-9fba-4c98-971f-266ecb0d1ab1-serving-cert\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.865112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-client-ca\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.865169 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-config\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: 
\"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.865189 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbw7k\" (UniqueName: \"kubernetes.io/projected/93f23f97-ccef-4a47-98b2-95dd31910260-kube-api-access-hbw7k\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.866646 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-client-ca\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.868790 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-config\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.868876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-proxy-ca-bundles\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.869540 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-config\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.869598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-client-ca\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.873189 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e63f470-9fba-4c98-971f-266ecb0d1ab1-serving-cert\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.873297 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f23f97-ccef-4a47-98b2-95dd31910260-serving-cert\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.889031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbw7k\" (UniqueName: \"kubernetes.io/projected/93f23f97-ccef-4a47-98b2-95dd31910260-kube-api-access-hbw7k\") pod \"controller-manager-74cfd4b58f-4tfv2\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 
09:03:33.889820 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7xt\" (UniqueName: \"kubernetes.io/projected/5e63f470-9fba-4c98-971f-266ecb0d1ab1-kube-api-access-gv7xt\") pod \"route-controller-manager-787886d75f-72j9h\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.943133 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:33 crc kubenswrapper[4932]: I0321 09:03:33.957654 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.174333 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gx64c" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="registry-server" probeResult="failure" output=< Mar 21 09:03:34 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 09:03:34 crc kubenswrapper[4932]: > Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.210456 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2"] Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.258933 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h"] Mar 21 09:03:34 crc kubenswrapper[4932]: W0321 09:03:34.265939 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e63f470_9fba_4c98_971f_266ecb0d1ab1.slice/crio-428b360f2a8cb5f50f5af2e16097ead2b1784a82843b9e5432e3e001858eb22c WatchSource:0}: Error finding container 
428b360f2a8cb5f50f5af2e16097ead2b1784a82843b9e5432e3e001858eb22c: Status 404 returned error can't find the container with id 428b360f2a8cb5f50f5af2e16097ead2b1784a82843b9e5432e3e001858eb22c Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.889409 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" event={"ID":"93f23f97-ccef-4a47-98b2-95dd31910260","Type":"ContainerStarted","Data":"b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c"} Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.891461 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" event={"ID":"93f23f97-ccef-4a47-98b2-95dd31910260","Type":"ContainerStarted","Data":"32d263aac8df5f035d1ef4b946ec0f17f14c73fd1bc7c2d9ee2c88fe26e79a7d"} Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.891605 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.892993 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" event={"ID":"5e63f470-9fba-4c98-971f-266ecb0d1ab1","Type":"ContainerStarted","Data":"19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4"} Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.893124 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" event={"ID":"5e63f470-9fba-4c98-971f-266ecb0d1ab1","Type":"ContainerStarted","Data":"428b360f2a8cb5f50f5af2e16097ead2b1784a82843b9e5432e3e001858eb22c"} Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.897799 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 
09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.922599 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" podStartSLOduration=2.92257944 podStartE2EDuration="2.92257944s" podCreationTimestamp="2026-03-21 09:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:34.921483016 +0000 UTC m=+318.516681295" watchObservedRunningTime="2026-03-21 09:03:34.92257944 +0000 UTC m=+318.517777719" Mar 21 09:03:34 crc kubenswrapper[4932]: I0321 09:03:34.947663 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" podStartSLOduration=2.947640768 podStartE2EDuration="2.947640768s" podCreationTimestamp="2026-03-21 09:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:34.946377299 +0000 UTC m=+318.541575588" watchObservedRunningTime="2026-03-21 09:03:34.947640768 +0000 UTC m=+318.542839037" Mar 21 09:03:35 crc kubenswrapper[4932]: I0321 09:03:35.904201 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:35 crc kubenswrapper[4932]: I0321 09:03:35.915789 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.407090 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.407183 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ttjm" 
Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.457879 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.834277 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.834480 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.892026 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.954493 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:03:36 crc kubenswrapper[4932]: I0321 09:03:36.962906 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:03:39 crc kubenswrapper[4932]: I0321 09:03:39.135184 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s775"] Mar 21 09:03:39 crc kubenswrapper[4932]: I0321 09:03:39.135551 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8s775" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="registry-server" containerID="cri-o://4703828ea9eb250cf831aae8b8bb0709984d989d42d2dd952502e4a8dd50a036" gracePeriod=2 Mar 21 09:03:40 crc kubenswrapper[4932]: E0321 09:03:40.324972 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7cf94af_e246_4ff3_92c6_7e184228e57d.slice/crio-conmon-4703828ea9eb250cf831aae8b8bb0709984d989d42d2dd952502e4a8dd50a036.scope\": RecentStats: unable to find data in memory cache]" Mar 21 09:03:40 crc kubenswrapper[4932]: I0321 09:03:40.933327 4932 generic.go:334] "Generic (PLEG): container finished" podID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerID="4703828ea9eb250cf831aae8b8bb0709984d989d42d2dd952502e4a8dd50a036" exitCode=0 Mar 21 09:03:40 crc kubenswrapper[4932]: I0321 09:03:40.933394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerDied","Data":"4703828ea9eb250cf831aae8b8bb0709984d989d42d2dd952502e4a8dd50a036"} Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.270139 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.395625 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-catalog-content\") pod \"b7cf94af-e246-4ff3-92c6-7e184228e57d\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.395719 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jgmw\" (UniqueName: \"kubernetes.io/projected/b7cf94af-e246-4ff3-92c6-7e184228e57d-kube-api-access-5jgmw\") pod \"b7cf94af-e246-4ff3-92c6-7e184228e57d\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.395751 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-utilities\") pod 
\"b7cf94af-e246-4ff3-92c6-7e184228e57d\" (UID: \"b7cf94af-e246-4ff3-92c6-7e184228e57d\") " Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.397082 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-utilities" (OuterVolumeSpecName: "utilities") pod "b7cf94af-e246-4ff3-92c6-7e184228e57d" (UID: "b7cf94af-e246-4ff3-92c6-7e184228e57d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.403008 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cf94af-e246-4ff3-92c6-7e184228e57d-kube-api-access-5jgmw" (OuterVolumeSpecName: "kube-api-access-5jgmw") pod "b7cf94af-e246-4ff3-92c6-7e184228e57d" (UID: "b7cf94af-e246-4ff3-92c6-7e184228e57d"). InnerVolumeSpecName "kube-api-access-5jgmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.498494 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jgmw\" (UniqueName: \"kubernetes.io/projected/b7cf94af-e246-4ff3-92c6-7e184228e57d-kube-api-access-5jgmw\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.498530 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.524492 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7cf94af-e246-4ff3-92c6-7e184228e57d" (UID: "b7cf94af-e246-4ff3-92c6-7e184228e57d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.600934 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cf94af-e246-4ff3-92c6-7e184228e57d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.944909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s775" event={"ID":"b7cf94af-e246-4ff3-92c6-7e184228e57d","Type":"ContainerDied","Data":"28f769933eb58fa8fc85d3844197fd4be101cebc84bc30fc297195e2f489401c"} Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.944993 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s775" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.944998 4932 scope.go:117] "RemoveContainer" containerID="4703828ea9eb250cf831aae8b8bb0709984d989d42d2dd952502e4a8dd50a036" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.973093 4932 scope.go:117] "RemoveContainer" containerID="1d370b01cd23a8438f4ac75a72fcfbe94baeb2c978e5ef446610aa861a5d6273" Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.981137 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s775"] Mar 21 09:03:41 crc kubenswrapper[4932]: I0321 09:03:41.989117 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8s775"] Mar 21 09:03:42 crc kubenswrapper[4932]: I0321 09:03:42.005324 4932 scope.go:117] "RemoveContainer" containerID="faca934e3ca95df8da1445b427c6bf9bbb783c027cfcd3367f21b8f6159a71bc" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.161088 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.202249 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.677052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.711425 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" path="/var/lib/kubelet/pods/b7cf94af-e246-4ff3-92c6-7e184228e57d/volumes" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.737058 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.737176 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.739761 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.740510 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.755667 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.818510 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.838244 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.838386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.852943 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.860101 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.876077 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:03:43 crc kubenswrapper[4932]: I0321 09:03:43.876116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.020745 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.038498 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.048643 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 09:03:44 crc kubenswrapper[4932]: W0321 09:03:44.511866 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b36602702f73de9ce2702ef3f2895725575ff670196fa598222498a72396228f WatchSource:0}: Error finding container b36602702f73de9ce2702ef3f2895725575ff670196fa598222498a72396228f: Status 404 returned error can't find the container with id b36602702f73de9ce2702ef3f2895725575ff670196fa598222498a72396228f Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.532459 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7ztm"] Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.532805 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n7ztm" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="registry-server" containerID="cri-o://39c28885bb22b8dfebd11aed00444775dc987bf2d69cf3e1bb79a180c3b21483" gracePeriod=2 Mar 21 09:03:44 crc kubenswrapper[4932]: W0321 09:03:44.573800 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-99fa86e499000539a12b237aeac5f97f4378b2f2b55c5826d86c9393e8a92ac0 WatchSource:0}: Error finding container 99fa86e499000539a12b237aeac5f97f4378b2f2b55c5826d86c9393e8a92ac0: Status 404 returned error can't find the container with id 99fa86e499000539a12b237aeac5f97f4378b2f2b55c5826d86c9393e8a92ac0 Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.744800 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wt26g"] Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.963942 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"99fa86e499000539a12b237aeac5f97f4378b2f2b55c5826d86c9393e8a92ac0"} Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.965170 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"152e953fcde5e04f11edab4f4568df707b1874f6ebabbfec2d18963a9ceb746e"} Mar 21 09:03:44 crc kubenswrapper[4932]: I0321 09:03:44.966424 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b36602702f73de9ce2702ef3f2895725575ff670196fa598222498a72396228f"} Mar 21 09:03:45 crc kubenswrapper[4932]: I0321 09:03:45.973535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"42e22f237166d2ed7ef65d5e2526302eae5988ff300f56e0cee4b66dc08d3d69"} Mar 21 09:03:45 crc kubenswrapper[4932]: I0321 09:03:45.975057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8b30b7e4ffa22b3e8c297da5824094196d66b3d204219ae8eec02870d0f753bb"} Mar 21 09:03:45 crc kubenswrapper[4932]: I0321 09:03:45.975196 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:03:45 crc kubenswrapper[4932]: I0321 09:03:45.980417 4932 generic.go:334] "Generic (PLEG): container finished" podID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" 
containerID="39c28885bb22b8dfebd11aed00444775dc987bf2d69cf3e1bb79a180c3b21483" exitCode=0 Mar 21 09:03:45 crc kubenswrapper[4932]: I0321 09:03:45.980503 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerDied","Data":"39c28885bb22b8dfebd11aed00444775dc987bf2d69cf3e1bb79a180c3b21483"} Mar 21 09:03:45 crc kubenswrapper[4932]: I0321 09:03:45.981853 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ff3757bfe5a43ad98f2e7b81504e2700d113c96f6c2aba30075aa5c3bedf20a9"} Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.050609 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.171531 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2lqj\" (UniqueName: \"kubernetes.io/projected/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-kube-api-access-v2lqj\") pod \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.171783 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-utilities\") pod \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\" (UID: \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.171920 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-catalog-content\") pod \"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\" (UID: 
\"e5133ee9-0d6d-4533-81b4-9d8518eef6c6\") " Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.172552 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-utilities" (OuterVolumeSpecName: "utilities") pod "e5133ee9-0d6d-4533-81b4-9d8518eef6c6" (UID: "e5133ee9-0d6d-4533-81b4-9d8518eef6c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.184476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-kube-api-access-v2lqj" (OuterVolumeSpecName: "kube-api-access-v2lqj") pod "e5133ee9-0d6d-4533-81b4-9d8518eef6c6" (UID: "e5133ee9-0d6d-4533-81b4-9d8518eef6c6"). InnerVolumeSpecName "kube-api-access-v2lqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.244893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5133ee9-0d6d-4533-81b4-9d8518eef6c6" (UID: "e5133ee9-0d6d-4533-81b4-9d8518eef6c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.274423 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.274464 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2lqj\" (UniqueName: \"kubernetes.io/projected/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-kube-api-access-v2lqj\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.274479 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5133ee9-0d6d-4533-81b4-9d8518eef6c6-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.990340 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7ztm" event={"ID":"e5133ee9-0d6d-4533-81b4-9d8518eef6c6","Type":"ContainerDied","Data":"aa126a7b04ebcd87808d9d1110c908b5135b27ed56555a3f74daa6e0076f9b18"} Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.991568 4932 scope.go:117] "RemoveContainer" containerID="39c28885bb22b8dfebd11aed00444775dc987bf2d69cf3e1bb79a180c3b21483" Mar 21 09:03:46 crc kubenswrapper[4932]: I0321 09:03:46.990469 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7ztm" Mar 21 09:03:47 crc kubenswrapper[4932]: I0321 09:03:47.012326 4932 scope.go:117] "RemoveContainer" containerID="15e4f24b8147e3fb481eba931c9751a19b0f8f712f7646428ccb4680811d195b" Mar 21 09:03:47 crc kubenswrapper[4932]: I0321 09:03:47.029048 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7ztm"] Mar 21 09:03:47 crc kubenswrapper[4932]: I0321 09:03:47.037340 4932 scope.go:117] "RemoveContainer" containerID="e1ddf6afd4b3dec42dc10db86900216ec2eca506c8f489392dfd2d9c37778b65" Mar 21 09:03:47 crc kubenswrapper[4932]: I0321 09:03:47.038735 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n7ztm"] Mar 21 09:03:47 crc kubenswrapper[4932]: I0321 09:03:47.711819 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" path="/var/lib/kubelet/pods/e5133ee9-0d6d-4533-81b4-9d8518eef6c6/volumes" Mar 21 09:03:51 crc kubenswrapper[4932]: I0321 09:03:51.969491 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2"] Mar 21 09:03:51 crc kubenswrapper[4932]: I0321 09:03:51.970001 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" podUID="93f23f97-ccef-4a47-98b2-95dd31910260" containerName="controller-manager" containerID="cri-o://b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c" gracePeriod=30 Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.064400 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h"] Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.065621 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" podUID="5e63f470-9fba-4c98-971f-266ecb0d1ab1" containerName="route-controller-manager" containerID="cri-o://19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4" gracePeriod=30 Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.532309 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.582872 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.583825 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e63f470-9fba-4c98-971f-266ecb0d1ab1-serving-cert\") pod \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.583892 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-client-ca\") pod \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.583988 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-config\") pod \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.584098 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv7xt\" (UniqueName: 
\"kubernetes.io/projected/5e63f470-9fba-4c98-971f-266ecb0d1ab1-kube-api-access-gv7xt\") pod \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\" (UID: \"5e63f470-9fba-4c98-971f-266ecb0d1ab1\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.584992 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e63f470-9fba-4c98-971f-266ecb0d1ab1" (UID: "5e63f470-9fba-4c98-971f-266ecb0d1ab1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.585197 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-config" (OuterVolumeSpecName: "config") pod "5e63f470-9fba-4c98-971f-266ecb0d1ab1" (UID: "5e63f470-9fba-4c98-971f-266ecb0d1ab1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.594880 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e63f470-9fba-4c98-971f-266ecb0d1ab1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e63f470-9fba-4c98-971f-266ecb0d1ab1" (UID: "5e63f470-9fba-4c98-971f-266ecb0d1ab1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.594861 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e63f470-9fba-4c98-971f-266ecb0d1ab1-kube-api-access-gv7xt" (OuterVolumeSpecName: "kube-api-access-gv7xt") pod "5e63f470-9fba-4c98-971f-266ecb0d1ab1" (UID: "5e63f470-9fba-4c98-971f-266ecb0d1ab1"). InnerVolumeSpecName "kube-api-access-gv7xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.685381 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-proxy-ca-bundles\") pod \"93f23f97-ccef-4a47-98b2-95dd31910260\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.685486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-config\") pod \"93f23f97-ccef-4a47-98b2-95dd31910260\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.685558 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-client-ca\") pod \"93f23f97-ccef-4a47-98b2-95dd31910260\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.685620 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f23f97-ccef-4a47-98b2-95dd31910260-serving-cert\") pod \"93f23f97-ccef-4a47-98b2-95dd31910260\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.685707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbw7k\" (UniqueName: \"kubernetes.io/projected/93f23f97-ccef-4a47-98b2-95dd31910260-kube-api-access-hbw7k\") pod \"93f23f97-ccef-4a47-98b2-95dd31910260\" (UID: \"93f23f97-ccef-4a47-98b2-95dd31910260\") " Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686510 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686552 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv7xt\" (UniqueName: \"kubernetes.io/projected/5e63f470-9fba-4c98-971f-266ecb0d1ab1-kube-api-access-gv7xt\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686568 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e63f470-9fba-4c98-971f-266ecb0d1ab1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686581 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e63f470-9fba-4c98-971f-266ecb0d1ab1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686803 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-client-ca" (OuterVolumeSpecName: "client-ca") pod "93f23f97-ccef-4a47-98b2-95dd31910260" (UID: "93f23f97-ccef-4a47-98b2-95dd31910260"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686900 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93f23f97-ccef-4a47-98b2-95dd31910260" (UID: "93f23f97-ccef-4a47-98b2-95dd31910260"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.686997 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-config" (OuterVolumeSpecName: "config") pod "93f23f97-ccef-4a47-98b2-95dd31910260" (UID: "93f23f97-ccef-4a47-98b2-95dd31910260"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.689236 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f23f97-ccef-4a47-98b2-95dd31910260-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93f23f97-ccef-4a47-98b2-95dd31910260" (UID: "93f23f97-ccef-4a47-98b2-95dd31910260"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.689804 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f23f97-ccef-4a47-98b2-95dd31910260-kube-api-access-hbw7k" (OuterVolumeSpecName: "kube-api-access-hbw7k") pod "93f23f97-ccef-4a47-98b2-95dd31910260" (UID: "93f23f97-ccef-4a47-98b2-95dd31910260"). InnerVolumeSpecName "kube-api-access-hbw7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.788540 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.788587 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.788597 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f23f97-ccef-4a47-98b2-95dd31910260-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.788607 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f23f97-ccef-4a47-98b2-95dd31910260-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:52 crc kubenswrapper[4932]: I0321 09:03:52.788617 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbw7k\" (UniqueName: \"kubernetes.io/projected/93f23f97-ccef-4a47-98b2-95dd31910260-kube-api-access-hbw7k\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.033302 4932 generic.go:334] "Generic (PLEG): container finished" podID="93f23f97-ccef-4a47-98b2-95dd31910260" containerID="b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c" exitCode=0 Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.033385 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" event={"ID":"93f23f97-ccef-4a47-98b2-95dd31910260","Type":"ContainerDied","Data":"b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c"} Mar 21 09:03:53 crc 
kubenswrapper[4932]: I0321 09:03:53.033431 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.034107 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2" event={"ID":"93f23f97-ccef-4a47-98b2-95dd31910260","Type":"ContainerDied","Data":"32d263aac8df5f035d1ef4b946ec0f17f14c73fd1bc7c2d9ee2c88fe26e79a7d"} Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.034144 4932 scope.go:117] "RemoveContainer" containerID="b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.035839 4932 generic.go:334] "Generic (PLEG): container finished" podID="5e63f470-9fba-4c98-971f-266ecb0d1ab1" containerID="19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4" exitCode=0 Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.035894 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" event={"ID":"5e63f470-9fba-4c98-971f-266ecb0d1ab1","Type":"ContainerDied","Data":"19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4"} Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.035933 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" event={"ID":"5e63f470-9fba-4c98-971f-266ecb0d1ab1","Type":"ContainerDied","Data":"428b360f2a8cb5f50f5af2e16097ead2b1784a82843b9e5432e3e001858eb22c"} Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.035950 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.057471 4932 scope.go:117] "RemoveContainer" containerID="b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.058629 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c\": container with ID starting with b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c not found: ID does not exist" containerID="b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.058698 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c"} err="failed to get container status \"b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c\": rpc error: code = NotFound desc = could not find container \"b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c\": container with ID starting with b2ef1710a4890b08fd63b1ca2b245297d1592d58ceb859c0ea286d5f5266db4c not found: ID does not exist" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.058732 4932 scope.go:117] "RemoveContainer" containerID="19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.072994 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.080881 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74cfd4b58f-4tfv2"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.086340 4932 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.090558 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787886d75f-72j9h"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.092879 4932 scope.go:117] "RemoveContainer" containerID="19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.093786 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4\": container with ID starting with 19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4 not found: ID does not exist" containerID="19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.093847 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4"} err="failed to get container status \"19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4\": rpc error: code = NotFound desc = could not find container \"19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4\": container with ID starting with 19cca9246294b25e431d7f5e1f13984ef4401219a6bc97f3fe537bfd344cd7e4 not found: ID does not exist" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.628327 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz"] Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.629190 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="extract-utilities" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 
09:03:53.629385 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="extract-utilities" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.629532 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="registry-server" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.629666 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="registry-server" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.629788 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="registry-server" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.629920 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="registry-server" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.630193 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e63f470-9fba-4c98-971f-266ecb0d1ab1" containerName="route-controller-manager" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.630319 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e63f470-9fba-4c98-971f-266ecb0d1ab1" containerName="route-controller-manager" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.630477 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f23f97-ccef-4a47-98b2-95dd31910260" containerName="controller-manager" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.630596 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f23f97-ccef-4a47-98b2-95dd31910260" containerName="controller-manager" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.630726 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="extract-content" Mar 21 09:03:53 crc 
kubenswrapper[4932]: I0321 09:03:53.630858 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="extract-content" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.630991 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="extract-utilities" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.631099 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="extract-utilities" Mar 21 09:03:53 crc kubenswrapper[4932]: E0321 09:03:53.631205 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="extract-content" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.631321 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="extract-content" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.631681 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e63f470-9fba-4c98-971f-266ecb0d1ab1" containerName="route-controller-manager" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.631811 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5133ee9-0d6d-4533-81b4-9d8518eef6c6" containerName="registry-server" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.631943 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f23f97-ccef-4a47-98b2-95dd31910260" containerName="controller-manager" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.632057 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cf94af-e246-4ff3-92c6-7e184228e57d" containerName="registry-server" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.632925 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.634929 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.635955 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.637460 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.637516 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.637934 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.638056 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.638142 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.641660 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.642061 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.644257 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 
09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.644900 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.651991 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.654566 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.654952 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.658592 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.660557 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.674375 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4"] Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.701901 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-client-ca\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.701976 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/56239543-8abd-4c42-9d6b-8c430375a815-serving-cert\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-config\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702122 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-proxy-ca-bundles\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702157 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e44c40-9633-4097-ba0d-e0c6f483d5cf-config\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702235 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds7bj\" (UniqueName: \"kubernetes.io/projected/56239543-8abd-4c42-9d6b-8c430375a815-kube-api-access-ds7bj\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " 
pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702289 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e44c40-9633-4097-ba0d-e0c6f483d5cf-serving-cert\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xls9v\" (UniqueName: \"kubernetes.io/projected/34e44c40-9633-4097-ba0d-e0c6f483d5cf-kube-api-access-xls9v\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.702438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e44c40-9633-4097-ba0d-e0c6f483d5cf-client-ca\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.710486 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e63f470-9fba-4c98-971f-266ecb0d1ab1" path="/var/lib/kubelet/pods/5e63f470-9fba-4c98-971f-266ecb0d1ab1/volumes" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.711567 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f23f97-ccef-4a47-98b2-95dd31910260" path="/var/lib/kubelet/pods/93f23f97-ccef-4a47-98b2-95dd31910260/volumes" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 
09:03:53.803649 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e44c40-9633-4097-ba0d-e0c6f483d5cf-serving-cert\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.804157 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xls9v\" (UniqueName: \"kubernetes.io/projected/34e44c40-9633-4097-ba0d-e0c6f483d5cf-kube-api-access-xls9v\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.804431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34e44c40-9633-4097-ba0d-e0c6f483d5cf-client-ca\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.804706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-client-ca\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.804945 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-config\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: 
\"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.805143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56239543-8abd-4c42-9d6b-8c430375a815-serving-cert\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.805447 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-proxy-ca-bundles\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.805621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e44c40-9633-4097-ba0d-e0c6f483d5cf-config\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.805803 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds7bj\" (UniqueName: \"kubernetes.io/projected/56239543-8abd-4c42-9d6b-8c430375a815-kube-api-access-ds7bj\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.805920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/34e44c40-9633-4097-ba0d-e0c6f483d5cf-client-ca\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.806007 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-client-ca\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.806740 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-proxy-ca-bundles\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.807990 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56239543-8abd-4c42-9d6b-8c430375a815-config\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.808002 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e44c40-9633-4097-ba0d-e0c6f483d5cf-config\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.812924 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56239543-8abd-4c42-9d6b-8c430375a815-serving-cert\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.818209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34e44c40-9633-4097-ba0d-e0c6f483d5cf-serving-cert\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.823184 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xls9v\" (UniqueName: \"kubernetes.io/projected/34e44c40-9633-4097-ba0d-e0c6f483d5cf-kube-api-access-xls9v\") pod \"route-controller-manager-577559c8c8-vkzb4\" (UID: \"34e44c40-9633-4097-ba0d-e0c6f483d5cf\") " pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.833602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds7bj\" (UniqueName: \"kubernetes.io/projected/56239543-8abd-4c42-9d6b-8c430375a815-kube-api-access-ds7bj\") pod \"controller-manager-7d9b995f9d-xbjwz\" (UID: \"56239543-8abd-4c42-9d6b-8c430375a815\") " pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.977214 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:53 crc kubenswrapper[4932]: I0321 09:03:53.984873 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:54 crc kubenswrapper[4932]: I0321 09:03:54.451067 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4"] Mar 21 09:03:54 crc kubenswrapper[4932]: I0321 09:03:54.513552 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz"] Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.053980 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" event={"ID":"34e44c40-9633-4097-ba0d-e0c6f483d5cf","Type":"ContainerStarted","Data":"69b44a717f31c8534c08764a5177c15b7568a19aec37f330c24832842ef7c7b2"} Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.054362 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" event={"ID":"34e44c40-9633-4097-ba0d-e0c6f483d5cf","Type":"ContainerStarted","Data":"400afdcd2663903b7795a39b00f3398caaa5e55566f4dde9b77fffb44d1c762e"} Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.054389 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.055758 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" event={"ID":"56239543-8abd-4c42-9d6b-8c430375a815","Type":"ContainerStarted","Data":"0964ca4f44f214b3a7e81929fc8a4543f0bcd47e76e324727784c2941eea8591"} Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.055815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" 
event={"ID":"56239543-8abd-4c42-9d6b-8c430375a815","Type":"ContainerStarted","Data":"0e41168bde2e0c75598674d8e2cb99a135badd5dcbeda0bb648f545e81b7590a"} Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.056818 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.112883 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.124981 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" podStartSLOduration=3.124966014 podStartE2EDuration="3.124966014s" podCreationTimestamp="2026-03-21 09:03:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:55.087391336 +0000 UTC m=+338.682589605" watchObservedRunningTime="2026-03-21 09:03:55.124966014 +0000 UTC m=+338.720164283" Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.169328 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d9b995f9d-xbjwz" podStartSLOduration=4.1693112039999995 podStartE2EDuration="4.169311204s" podCreationTimestamp="2026-03-21 09:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:03:55.126505882 +0000 UTC m=+338.721704171" watchObservedRunningTime="2026-03-21 09:03:55.169311204 +0000 UTC m=+338.764509473" Mar 21 09:03:55 crc kubenswrapper[4932]: I0321 09:03:55.238725 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-577559c8c8-vkzb4" Mar 21 
09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.442447 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.443900 4932 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444169 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444231 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444367 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444378 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444387 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444393 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444400 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444406 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444415 4932 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444421 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444428 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444434 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444444 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444450 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444458 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444464 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444475 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444483 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444496 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444504 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.444514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444520 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444609 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444616 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444623 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444631 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444637 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc 
kubenswrapper[4932]: I0321 09:03:57.444645 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444653 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444834 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.444846 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.446034 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd" gracePeriod=15 Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.446070 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe" gracePeriod=15 Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.446048 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd" gracePeriod=15 Mar 21 09:03:57 crc 
kubenswrapper[4932]: I0321 09:03:57.446211 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79" gracePeriod=15 Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.446321 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218" gracePeriod=15 Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.452768 4932 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.489977 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568173 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568206 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568238 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.568320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.669945 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670089 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670123 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670229 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670258 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 
09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670202 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670383 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.670392 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.705591 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:57 crc kubenswrapper[4932]: I0321 09:03:57.795749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 09:03:57 crc kubenswrapper[4932]: W0321 09:03:57.818427 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d53f9d6c7fef3c29ced169afe3c857093924719d493d4bd5b55550aec889da07 WatchSource:0}: Error finding container d53f9d6c7fef3c29ced169afe3c857093924719d493d4bd5b55550aec889da07: Status 404 returned error can't find the container with id d53f9d6c7fef3c29ced169afe3c857093924719d493d4bd5b55550aec889da07 Mar 21 09:03:57 crc kubenswrapper[4932]: E0321 09:03:57.821326 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ecfd9db0e3208 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 09:03:57.820547592 +0000 UTC m=+341.415745861,LastTimestamp:2026-03-21 09:03:57.820547592 +0000 UTC m=+341.415745861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.080624 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.082643 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.083799 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd" exitCode=0 Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.083835 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe" exitCode=0 Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.083845 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd" exitCode=0 Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.083854 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79" exitCode=2 Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.083910 4932 scope.go:117] "RemoveContainer" containerID="964dba0bd5d4419d7917f0adf3c5880a4381854f05e31524d703d38c4089baec" Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.085574 4932 generic.go:334] "Generic (PLEG): container finished" podID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" containerID="807084e677df5fd22fc5058976904f4bf732811886af0839816e150a04769f2e" exitCode=0 Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.085659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"49804aeb-dfa6-4f73-82bf-4d8d29792d18","Type":"ContainerDied","Data":"807084e677df5fd22fc5058976904f4bf732811886af0839816e150a04769f2e"} Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.086420 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.086827 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:58 crc kubenswrapper[4932]: I0321 09:03:58.089615 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d53f9d6c7fef3c29ced169afe3c857093924719d493d4bd5b55550aec889da07"} Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.099670 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.101734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f"} Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.102627 4932 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.102940 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.475058 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.476174 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.476625 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.598336 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kube-api-access\") pod \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\" (UID: 
\"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.598907 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-var-lock\") pod \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.598952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kubelet-dir\") pod \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\" (UID: \"49804aeb-dfa6-4f73-82bf-4d8d29792d18\") " Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.599399 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-var-lock" (OuterVolumeSpecName: "var-lock") pod "49804aeb-dfa6-4f73-82bf-4d8d29792d18" (UID: "49804aeb-dfa6-4f73-82bf-4d8d29792d18"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.599461 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49804aeb-dfa6-4f73-82bf-4d8d29792d18" (UID: "49804aeb-dfa6-4f73-82bf-4d8d29792d18"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.599515 4932 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-var-lock\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.615710 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49804aeb-dfa6-4f73-82bf-4d8d29792d18" (UID: "49804aeb-dfa6-4f73-82bf-4d8d29792d18"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.700583 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.700622 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49804aeb-dfa6-4f73-82bf-4d8d29792d18-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.927170 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.928172 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.929000 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.929432 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:03:59 crc kubenswrapper[4932]: I0321 09:03:59.929788 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.004888 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.004975 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005003 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005047 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005107 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005200 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005321 4932 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005339 4932 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.005374 4932 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.113014 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.114082 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218" exitCode=0 Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.114180 4932 scope.go:117] "RemoveContainer" containerID="39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.114401 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.117772 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.117765 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"49804aeb-dfa6-4f73-82bf-4d8d29792d18","Type":"ContainerDied","Data":"69898ebacfe9b8f20ad954ad79609ab59a6183b31d682ec435cbf36f0ac1ecdb"} Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.117827 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69898ebacfe9b8f20ad954ad79609ab59a6183b31d682ec435cbf36f0ac1ecdb" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.122256 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.123243 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.123629 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.132465 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.132872 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.133332 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.138044 4932 scope.go:117] "RemoveContainer" containerID="835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.156490 4932 scope.go:117] "RemoveContainer" containerID="6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.173553 4932 scope.go:117] "RemoveContainer" containerID="67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.193118 4932 scope.go:117] "RemoveContainer" containerID="fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.214415 4932 scope.go:117] "RemoveContainer" containerID="45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.244228 4932 scope.go:117] 
"RemoveContainer" containerID="39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd" Mar 21 09:04:00 crc kubenswrapper[4932]: E0321 09:04:00.245017 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\": container with ID starting with 39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd not found: ID does not exist" containerID="39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.245052 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd"} err="failed to get container status \"39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\": rpc error: code = NotFound desc = could not find container \"39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd\": container with ID starting with 39166ff0dc3722591092e0b3736b838e8cd001575b894f91e73924cf2daa64dd not found: ID does not exist" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.245076 4932 scope.go:117] "RemoveContainer" containerID="835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe" Mar 21 09:04:00 crc kubenswrapper[4932]: E0321 09:04:00.245490 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\": container with ID starting with 835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe not found: ID does not exist" containerID="835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.245513 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe"} err="failed to get container status \"835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\": rpc error: code = NotFound desc = could not find container \"835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe\": container with ID starting with 835830d4ce2514f988d5fb1113bf499cf2f5fda0e70e606af255ca8d4ecf40fe not found: ID does not exist" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.245526 4932 scope.go:117] "RemoveContainer" containerID="6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd" Mar 21 09:04:00 crc kubenswrapper[4932]: E0321 09:04:00.245854 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\": container with ID starting with 6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd not found: ID does not exist" containerID="6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.245927 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd"} err="failed to get container status \"6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\": rpc error: code = NotFound desc = could not find container \"6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd\": container with ID starting with 6fc65a21f410078f40bde08541c293b4f8842bce4ad3a7ea3431cca5e01ccbcd not found: ID does not exist" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.245983 4932 scope.go:117] "RemoveContainer" containerID="67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79" Mar 21 09:04:00 crc kubenswrapper[4932]: E0321 09:04:00.246495 4932 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\": container with ID starting with 67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79 not found: ID does not exist" containerID="67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.246571 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79"} err="failed to get container status \"67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\": rpc error: code = NotFound desc = could not find container \"67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79\": container with ID starting with 67feeca5872dae95bd253adb3b2d52dc3da2a3c1cc4577c889f19aa66ddb1b79 not found: ID does not exist" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.246610 4932 scope.go:117] "RemoveContainer" containerID="fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218" Mar 21 09:04:00 crc kubenswrapper[4932]: E0321 09:04:00.247052 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\": container with ID starting with fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218 not found: ID does not exist" containerID="fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.247082 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218"} err="failed to get container status \"fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\": rpc error: code = NotFound desc = could not find container 
\"fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218\": container with ID starting with fdd7b178171c2f8a753d728bedd66c56459d0e829741c9493ba9938537dea218 not found: ID does not exist" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.247102 4932 scope.go:117] "RemoveContainer" containerID="45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39" Mar 21 09:04:00 crc kubenswrapper[4932]: E0321 09:04:00.247488 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\": container with ID starting with 45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39 not found: ID does not exist" containerID="45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39" Mar 21 09:04:00 crc kubenswrapper[4932]: I0321 09:04:00.247532 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39"} err="failed to get container status \"45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\": rpc error: code = NotFound desc = could not find container \"45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39\": container with ID starting with 45bfe967d4bbf5194930294ffa9d4ca9f2e860f429a312a91cbdd0b92906bb39 not found: ID does not exist" Mar 21 09:04:01 crc kubenswrapper[4932]: I0321 09:04:01.709728 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 21 09:04:03 crc kubenswrapper[4932]: E0321 09:04:03.894486 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ecfd9db0e3208 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 09:03:57.820547592 +0000 UTC m=+341.415745861,LastTimestamp:2026-03-21 09:03:57.820547592 +0000 UTC m=+341.415745861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 09:04:04 crc kubenswrapper[4932]: E0321 09:04:04.854823 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:04 crc kubenswrapper[4932]: E0321 09:04:04.855328 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:04 crc kubenswrapper[4932]: E0321 09:04:04.855770 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:04 crc kubenswrapper[4932]: E0321 09:04:04.856012 4932 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:04 crc kubenswrapper[4932]: E0321 09:04:04.856223 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:04 crc kubenswrapper[4932]: I0321 09:04:04.856256 4932 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 21 09:04:04 crc kubenswrapper[4932]: E0321 09:04:04.856460 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms" Mar 21 09:04:05 crc kubenswrapper[4932]: E0321 09:04:05.058067 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Mar 21 09:04:05 crc kubenswrapper[4932]: E0321 09:04:05.459963 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Mar 21 09:04:06 crc kubenswrapper[4932]: E0321 09:04:06.260929 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s" Mar 21 09:04:07 
crc kubenswrapper[4932]: I0321 09:04:07.706945 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:07 crc kubenswrapper[4932]: I0321 09:04:07.707492 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:07 crc kubenswrapper[4932]: E0321 09:04:07.862667 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="3.2s" Mar 21 09:04:08 crc kubenswrapper[4932]: I0321 09:04:08.701726 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:08 crc kubenswrapper[4932]: I0321 09:04:08.703184 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:08 crc kubenswrapper[4932]: I0321 09:04:08.703830 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:08 crc kubenswrapper[4932]: I0321 09:04:08.722913 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:08 crc kubenswrapper[4932]: I0321 09:04:08.722955 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:08 crc kubenswrapper[4932]: E0321 09:04:08.723577 4932 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:08 crc kubenswrapper[4932]: I0321 09:04:08.724408 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.210737 4932 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="45bac032224b925c603c8a4e89e3ed7383792018955cdf9b511cec7331b66554" exitCode=0 Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.210828 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"45bac032224b925c603c8a4e89e3ed7383792018955cdf9b511cec7331b66554"} Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.211240 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6965833e900309c3852d2b53bff48fe2e54c9067353529a43d84df99a917b8b4"} Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.211709 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.211734 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.212020 4932 status_manager.go:851] "Failed to get status for pod" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:09 crc kubenswrapper[4932]: E0321 09:04:09.212307 4932 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.212306 4932 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Mar 21 09:04:09 crc kubenswrapper[4932]: I0321 09:04:09.777876 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" podUID="a1dbcfef-3656-4dac-82a6-78cc48d655df" containerName="oauth-openshift" containerID="cri-o://519d220e44ad0da68d36e265f1a9e01040eee944095af69b2f62938d5e833d78" gracePeriod=15 Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.221899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24fda4a92c08fe3a90f901222813a3e8895947b4007f831ca7083fca8270939a"} Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.222430 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51c5ac98b02976d8f85ba60f05ddff0f614d684914d9a6fc1cbca34485bb072d"} Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.222443 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f33600b24ed9d3e4fb17e1a52077506b6c406ba8a180df02081564e79451119"} Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.223968 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="a1dbcfef-3656-4dac-82a6-78cc48d655df" containerID="519d220e44ad0da68d36e265f1a9e01040eee944095af69b2f62938d5e833d78" exitCode=0 Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.224025 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" event={"ID":"a1dbcfef-3656-4dac-82a6-78cc48d655df","Type":"ContainerDied","Data":"519d220e44ad0da68d36e265f1a9e01040eee944095af69b2f62938d5e833d78"} Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.298451 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.361904 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-error\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.361967 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-router-certs\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.361998 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-policies\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362029 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-login\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362055 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-trusted-ca-bundle\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-ocp-branding-template\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362096 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-session\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362116 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-serving-cert\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362133 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrp5m\" (UniqueName: 
\"kubernetes.io/projected/a1dbcfef-3656-4dac-82a6-78cc48d655df-kube-api-access-hrp5m\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362158 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-cliconfig\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362186 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-service-ca\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362210 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-provider-selection\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362247 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-idp-0-file-data\") pod \"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362280 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-dir\") pod 
\"a1dbcfef-3656-4dac-82a6-78cc48d655df\" (UID: \"a1dbcfef-3656-4dac-82a6-78cc48d655df\") " Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.362554 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.363606 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.363705 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.364069 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.366156 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.369716 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.369834 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.370417 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.371363 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.374854 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.374886 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1dbcfef-3656-4dac-82a6-78cc48d655df-kube-api-access-hrp5m" (OuterVolumeSpecName: "kube-api-access-hrp5m") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "kube-api-access-hrp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.375523 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.376490 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.377875 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a1dbcfef-3656-4dac-82a6-78cc48d655df" (UID: "a1dbcfef-3656-4dac-82a6-78cc48d655df"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.463811 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464108 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464206 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc 
kubenswrapper[4932]: I0321 09:04:10.464314 4932 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464444 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464536 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464625 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464689 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464758 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464817 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-ocp-branding-template\") 
on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464875 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464932 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.464988 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrp5m\" (UniqueName: \"kubernetes.io/projected/a1dbcfef-3656-4dac-82a6-78cc48d655df-kube-api-access-hrp5m\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:10 crc kubenswrapper[4932]: I0321 09:04:10.465047 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1dbcfef-3656-4dac-82a6-78cc48d655df-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.240899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5510202db5e65f2620ae6bfae0f63fcb467ffae529c835a4e6c47fff58b58e35"} Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.241167 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.241784 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.241813 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.241846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"338067b08b7c07302b8246bbad117bd6e3e8fe88cb331dae6bc8e1bc4b275296"} Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.244403 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.245164 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.245225 4932 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f" exitCode=1 Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.245310 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f"} Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.245984 4932 scope.go:117] "RemoveContainer" containerID="2d0b2b66a637069cb6ef2da5dd8fd41c226129ad0e7272874f19566970ae1a9f" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.246934 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" 
event={"ID":"a1dbcfef-3656-4dac-82a6-78cc48d655df","Type":"ContainerDied","Data":"1a67ec2d33f39e3e69bf3d01c9fca72419f9ac014d7bf9ac6411120052cc68c6"} Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.246988 4932 scope.go:117] "RemoveContainer" containerID="519d220e44ad0da68d36e265f1a9e01040eee944095af69b2f62938d5e833d78" Mar 21 09:04:11 crc kubenswrapper[4932]: I0321 09:04:11.247138 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wt26g" Mar 21 09:04:12 crc kubenswrapper[4932]: I0321 09:04:12.256957 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 21 09:04:12 crc kubenswrapper[4932]: I0321 09:04:12.257525 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 21 09:04:12 crc kubenswrapper[4932]: I0321 09:04:12.257607 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f3f78e231622435e4ff4a0172a23c336ecf5831ec0222839d44004e707063e07"} Mar 21 09:04:13 crc kubenswrapper[4932]: I0321 09:04:13.724739 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:13 crc kubenswrapper[4932]: I0321 09:04:13.725420 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:13 crc kubenswrapper[4932]: I0321 09:04:13.752014 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:16 crc kubenswrapper[4932]: I0321 09:04:16.254652 
4932 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:16 crc kubenswrapper[4932]: I0321 09:04:16.289051 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:16 crc kubenswrapper[4932]: I0321 09:04:16.289123 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:16 crc kubenswrapper[4932]: I0321 09:04:16.293075 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:17 crc kubenswrapper[4932]: I0321 09:04:17.300165 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:17 crc kubenswrapper[4932]: I0321 09:04:17.300664 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="15b82a80-f33d-47fe-8bab-21cd111dba94" Mar 21 09:04:17 crc kubenswrapper[4932]: I0321 09:04:17.726081 4932 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5405ad16-e9cd-40ee-bde5-14d3f0edf18b" Mar 21 09:04:18 crc kubenswrapper[4932]: I0321 09:04:18.159402 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 09:04:20 crc kubenswrapper[4932]: I0321 09:04:20.266522 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 09:04:20 crc kubenswrapper[4932]: I0321 09:04:20.271381 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 09:04:22 crc kubenswrapper[4932]: I0321 09:04:22.512993 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 21 09:04:22 crc kubenswrapper[4932]: I0321 09:04:22.831046 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.001621 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.138056 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.468940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.472045 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.489901 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb0a5470-935a-4f5a-9a19-f261a853a79c-metrics-certs\") pod \"network-metrics-daemon-cpgnf\" (UID: \"fb0a5470-935a-4f5a-9a19-f261a853a79c\") " pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.499693 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 21 09:04:23 crc 
kubenswrapper[4932]: I0321 09:04:23.534974 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.543422 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cpgnf" Mar 21 09:04:23 crc kubenswrapper[4932]: I0321 09:04:23.754364 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 21 09:04:24 crc kubenswrapper[4932]: I0321 09:04:24.027337 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 09:04:24 crc kubenswrapper[4932]: I0321 09:04:24.041897 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 21 09:04:24 crc kubenswrapper[4932]: W0321 09:04:24.050116 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb0a5470_935a_4f5a_9a19_f261a853a79c.slice/crio-9625a0a2d495af7ea77717ab4d2e1ac0070da21890a97dd55af11ee3126b8749 WatchSource:0}: Error finding container 9625a0a2d495af7ea77717ab4d2e1ac0070da21890a97dd55af11ee3126b8749: Status 404 returned error can't find the container with id 9625a0a2d495af7ea77717ab4d2e1ac0070da21890a97dd55af11ee3126b8749 Mar 21 09:04:24 crc kubenswrapper[4932]: I0321 09:04:24.303182 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 21 09:04:24 crc kubenswrapper[4932]: I0321 09:04:24.346473 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" event={"ID":"fb0a5470-935a-4f5a-9a19-f261a853a79c","Type":"ContainerStarted","Data":"9625a0a2d495af7ea77717ab4d2e1ac0070da21890a97dd55af11ee3126b8749"} Mar 21 09:04:24 crc 
kubenswrapper[4932]: I0321 09:04:24.833769 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 21 09:04:24 crc kubenswrapper[4932]: I0321 09:04:24.916611 4932 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 21 09:04:25 crc kubenswrapper[4932]: I0321 09:04:25.356615 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" event={"ID":"fb0a5470-935a-4f5a-9a19-f261a853a79c","Type":"ContainerStarted","Data":"c2202fce16e703b319f2901fc8905413ca578508c204a7728b3c4f6517bb2ac8"} Mar 21 09:04:25 crc kubenswrapper[4932]: I0321 09:04:25.357543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cpgnf" event={"ID":"fb0a5470-935a-4f5a-9a19-f261a853a79c","Type":"ContainerStarted","Data":"70b7c3e2d01ee8f4839f5dde715c0168de947c16d4d1b1ec421bca8c640e4322"} Mar 21 09:04:25 crc kubenswrapper[4932]: I0321 09:04:25.589822 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 21 09:04:26 crc kubenswrapper[4932]: I0321 09:04:26.091668 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 21 09:04:26 crc kubenswrapper[4932]: I0321 09:04:26.770542 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 21 09:04:26 crc kubenswrapper[4932]: I0321 09:04:26.903687 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 21 09:04:26 crc kubenswrapper[4932]: I0321 09:04:26.904722 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 21 09:04:27 crc kubenswrapper[4932]: I0321 09:04:27.222327 4932 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 21 09:04:28 crc kubenswrapper[4932]: I0321 09:04:28.163970 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 09:04:28 crc kubenswrapper[4932]: I0321 09:04:28.172911 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 21 09:04:28 crc kubenswrapper[4932]: I0321 09:04:28.281222 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 09:04:28 crc kubenswrapper[4932]: I0321 09:04:28.448979 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 21 09:04:28 crc kubenswrapper[4932]: I0321 09:04:28.522428 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 21 09:04:28 crc kubenswrapper[4932]: I0321 09:04:28.694532 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 21 09:04:29 crc kubenswrapper[4932]: I0321 09:04:29.165935 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 21 09:04:29 crc kubenswrapper[4932]: I0321 09:04:29.723210 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 21 09:04:29 crc kubenswrapper[4932]: I0321 09:04:29.870404 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 21 09:04:29 crc kubenswrapper[4932]: I0321 09:04:29.904322 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 21 09:04:29 crc kubenswrapper[4932]: I0321 09:04:29.906762 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.026339 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.043553 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.051249 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.196380 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.315101 4932 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.318294 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=33.31827431 podStartE2EDuration="33.31827431s" podCreationTimestamp="2026-03-21 09:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:04:15.96047459 +0000 UTC m=+359.555672899" watchObservedRunningTime="2026-03-21 09:04:30.31827431 +0000 UTC m=+373.913472579" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.319738 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cpgnf" podStartSLOduration=316.319732556 podStartE2EDuration="5m16.319732556s" 
podCreationTimestamp="2026-03-21 08:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:04:25.378465493 +0000 UTC m=+368.973663782" watchObservedRunningTime="2026-03-21 09:04:30.319732556 +0000 UTC m=+373.914930825" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.320478 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-wt26g"] Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.320538 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.320557 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cpgnf"] Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.325082 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.341153 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.341122007 podStartE2EDuration="14.341122007s" podCreationTimestamp="2026-03-21 09:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:04:30.337379439 +0000 UTC m=+373.932577728" watchObservedRunningTime="2026-03-21 09:04:30.341122007 +0000 UTC m=+373.936320276" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.483726 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.630731 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.749339 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.761214 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 21 09:04:30 crc kubenswrapper[4932]: I0321 09:04:30.902010 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.102724 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.108778 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.191263 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.203179 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.226977 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.237369 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.261730 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.349286 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.349395 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.387303 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.421910 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.449944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.475956 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.523768 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.665409 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.714559 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1dbcfef-3656-4dac-82a6-78cc48d655df" path="/var/lib/kubelet/pods/a1dbcfef-3656-4dac-82a6-78cc48d655df/volumes"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.800622 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.865945 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.887281 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.887532 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.919639 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 21 09:04:31 crc kubenswrapper[4932]: I0321 09:04:31.988518 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.090515 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.242289 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.244672 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.262412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.432503 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.438843 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.462020 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.651209 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.733580 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.976578 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 21 09:04:32 crc kubenswrapper[4932]: I0321 09:04:32.999568 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.147151 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.211976 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.243843 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.244087 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.307543 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.358935 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.412044 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.423535 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.469888 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.488984 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.530061 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.555189 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.589331 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.648440 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.698026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.894963 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 21 09:04:33 crc kubenswrapper[4932]: I0321 09:04:33.920174 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.050418 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.054949 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.095211 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.149528 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.187729 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.314140 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.354369 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.508334 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.559282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.617627 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.649220 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.674540 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.722118 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.755918 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.789248 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.827814 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 21 09:04:34 crc kubenswrapper[4932]: I0321 09:04:34.895851 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.069314 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.088077 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.115092 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.116673 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.133504 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.166433 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.250615 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.315477 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.425221 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.517750 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.572478 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.574633 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.644518 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.658515 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.699880 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.707752 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.771936 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.783221 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.822660 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.922531 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.977434 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 21 09:04:35 crc kubenswrapper[4932]: I0321 09:04:35.981435 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.014519 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.107337 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.290043 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.320866 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.489258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.630195 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.821220 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.871110 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.966616 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 21 09:04:36 crc kubenswrapper[4932]: I0321 09:04:36.976965 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.040889 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.086235 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.171051 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.176288 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.183945 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.195669 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.209336 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.279020 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.358152 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.501407 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.502589 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.504923 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.523091 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.583154 4932 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.603284 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.633673 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.667912 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.671408 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.680197 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.794757 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.811574 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.853902 4932 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.896373 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.900536 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.901282 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.972130 4932 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 21 09:04:37 crc kubenswrapper[4932]: I0321 09:04:37.979798 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.011265 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.022214 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.089189 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.220843 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.241907 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.320150 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.348486 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.461331 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.503279 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.517179 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.529608 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.574312 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.632775 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.633876 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.662227 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.699865 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.742185 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.766795 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.774710 4932 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.775117 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f" gracePeriod=5
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.823459 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.842715 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 21 09:04:38 crc kubenswrapper[4932]: I0321 09:04:38.869541 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.230930 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.233618 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.260801 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.275519 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.372368 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.528786 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.531115 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.577695 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.587133 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.610943 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.728569 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 21 09:04:39 crc kubenswrapper[4932]: I0321 09:04:39.847774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.005007 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.021466 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.168266 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.251517 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.330407 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.407985 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.506776 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.646479 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.833108 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.874321 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 21 09:04:40 crc kubenswrapper[4932]: I0321 09:04:40.925282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.006509 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.049478 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.121061 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.136453 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.152834 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.246131 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.504479 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.505780 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.506041 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.512301 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.621697 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.763166 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.958048 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.993272 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568064-58rl6"]
Mar 21 09:04:41 crc kubenswrapper[4932]: E0321 09:04:41.993907 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.993952 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 21 09:04:41 crc kubenswrapper[4932]: E0321 09:04:41.993989 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" containerName="installer"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.994007 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" containerName="installer"
Mar 21 09:04:41 crc kubenswrapper[4932]: E0321 09:04:41.994049 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dbcfef-3656-4dac-82a6-78cc48d655df" containerName="oauth-openshift"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.994070 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dbcfef-3656-4dac-82a6-78cc48d655df" containerName="oauth-openshift"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.994536 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="49804aeb-dfa6-4f73-82bf-4d8d29792d18" containerName="installer"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.994568 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 21 09:04:41 crc kubenswrapper[4932]: I0321 09:04:41.994594 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1dbcfef-3656-4dac-82a6-78cc48d655df" containerName="oauth-openshift"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:41.995870 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568064-58rl6"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:41.999215 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-849dbf65f-8g2gm"]
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:41.999555 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.000400 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.003705 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.004620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.004645 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.004849 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.007694 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568064-58rl6"]
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.013396 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.013608 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.013816 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.014063 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.014315 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.014804 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.014968 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.015429 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.015702 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.015723 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.019910 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s69v\" (UniqueName: \"kubernetes.io/projected/3a1438e2-78ca-424b-968d-3a749eac42ec-kube-api-access-2s69v\") pod \"auto-csr-approver-29568064-58rl6\" (UID: \"3a1438e2-78ca-424b-968d-3a749eac42ec\") " pod="openshift-infra/auto-csr-approver-29568064-58rl6"
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.020227 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.022971 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.028075 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.036296 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-849dbf65f-8g2gm"] Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.042736 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121190 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121259 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-login\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6skl\" (UniqueName: 
\"kubernetes.io/projected/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-kube-api-access-l6skl\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121341 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-audit-dir\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121406 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-error\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121475 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-session\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121512 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-audit-policies\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" 
Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121559 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-cliconfig\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121594 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-serving-cert\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121636 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-router-certs\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121686 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121726 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-service-ca\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121765 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121838 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s69v\" (UniqueName: \"kubernetes.io/projected/3a1438e2-78ca-424b-968d-3a749eac42ec-kube-api-access-2s69v\") pod \"auto-csr-approver-29568064-58rl6\" (UID: \"3a1438e2-78ca-424b-968d-3a749eac42ec\") " pod="openshift-infra/auto-csr-approver-29568064-58rl6" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.121877 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.138292 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.147125 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s69v\" 
(UniqueName: \"kubernetes.io/projected/3a1438e2-78ca-424b-968d-3a749eac42ec-kube-api-access-2s69v\") pod \"auto-csr-approver-29568064-58rl6\" (UID: \"3a1438e2-78ca-424b-968d-3a749eac42ec\") " pod="openshift-infra/auto-csr-approver-29568064-58rl6" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223312 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-service-ca\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223390 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223415 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223460 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " 
pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-login\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6skl\" (UniqueName: \"kubernetes.io/projected/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-kube-api-access-l6skl\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223561 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-audit-dir\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-error\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223611 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-session\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223632 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-audit-policies\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-cliconfig\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.223675 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-serving-cert\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc 
kubenswrapper[4932]: I0321 09:04:42.223732 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-router-certs\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.224486 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-service-ca\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.224648 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-audit-dir\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.225337 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-cliconfig\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.226072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-audit-policies\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: 
\"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.226105 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.228754 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.229116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-session\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.229235 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-router-certs\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.229426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.229446 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-error\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.229660 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-system-serving-cert\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.232314 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-template-login\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.232430 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " 
pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.256083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6skl\" (UniqueName: \"kubernetes.io/projected/9e80d719-b3e6-4c76-aed0-d7d7e4de4045-kube-api-access-l6skl\") pod \"oauth-openshift-849dbf65f-8g2gm\" (UID: \"9e80d719-b3e6-4c76-aed0-d7d7e4de4045\") " pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.264010 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.275288 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.340828 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568064-58rl6" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.353487 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.366234 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.410899 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.751259 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-849dbf65f-8g2gm"] Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.788160 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.808613 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568064-58rl6"] Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.810737 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 21 09:04:42 crc kubenswrapper[4932]: W0321 09:04:42.814818 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1438e2_78ca_424b_968d_3a749eac42ec.slice/crio-2eaf83f43cbf79d7a7588137dd2e60d78aa50de4938e8751d4ba0dd317b330a1 WatchSource:0}: Error finding container 2eaf83f43cbf79d7a7588137dd2e60d78aa50de4938e8751d4ba0dd317b330a1: Status 404 returned error can't find the container with id 2eaf83f43cbf79d7a7588137dd2e60d78aa50de4938e8751d4ba0dd317b330a1 Mar 21 09:04:42 crc kubenswrapper[4932]: I0321 09:04:42.966778 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.079669 4932 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.417602 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.471644 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" event={"ID":"9e80d719-b3e6-4c76-aed0-d7d7e4de4045","Type":"ContainerStarted","Data":"76b8f47a21b98d2dd7028667f1ba80ab9f214b44ab320265ee5bd0d4257ea083"} Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.471742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" event={"ID":"9e80d719-b3e6-4c76-aed0-d7d7e4de4045","Type":"ContainerStarted","Data":"1f71bba27ddca4447b647c9d7f2892ff05ebbef79a7380d906e71f663717e860"} Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.472244 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.477848 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568064-58rl6" event={"ID":"3a1438e2-78ca-424b-968d-3a749eac42ec","Type":"ContainerStarted","Data":"2eaf83f43cbf79d7a7588137dd2e60d78aa50de4938e8751d4ba0dd317b330a1"} Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.508760 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm" podStartSLOduration=59.508735249 podStartE2EDuration="59.508735249s" podCreationTimestamp="2026-03-21 09:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:04:43.495296958 +0000 UTC m=+387.090495247" 
watchObservedRunningTime="2026-03-21 09:04:43.508735249 +0000 UTC m=+387.103933538"
Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.583742 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-849dbf65f-8g2gm"
Mar 21 09:04:43 crc kubenswrapper[4932]: I0321 09:04:43.719034 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.444229 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.444661 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.488817 4932 generic.go:334] "Generic (PLEG): container finished" podID="3a1438e2-78ca-424b-968d-3a749eac42ec" containerID="593d6b66dde78bbbe1337a6374efe4a9cade7527802ccbaf27ce2d7f4051bc2a" exitCode=0
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.488915 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568064-58rl6" event={"ID":"3a1438e2-78ca-424b-968d-3a749eac42ec","Type":"ContainerDied","Data":"593d6b66dde78bbbe1337a6374efe4a9cade7527802ccbaf27ce2d7f4051bc2a"}
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.492436 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.492549 4932 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f" exitCode=137
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.492650 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.492695 4932 scope.go:117] "RemoveContainer" containerID="d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.513603 4932 scope.go:117] "RemoveContainer" containerID="d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f"
Mar 21 09:04:44 crc kubenswrapper[4932]: E0321 09:04:44.514249 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f\": container with ID starting with d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f not found: ID does not exist" containerID="d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.514327 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f"} err="failed to get container status \"d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f\": rpc error: code = NotFound desc = could not find container \"d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f\": container with ID starting with d35c914490fedee132d051414ff9ddeab24bf6855678555afb26142ce17ee88f not found: ID does not exist"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.548914 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566264 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566319 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566430 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566471 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566462 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566513 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566531 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566602 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.566634 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.568497 4932 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.568561 4932 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.568582 4932 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.568602 4932 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.575228 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.670233 4932 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.680231 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.866507 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 21 09:04:44 crc kubenswrapper[4932]: I0321 09:04:44.993553 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.711557 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.712379 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.724696 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.724741 4932 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bea5a18c-d812-46c0-8741-e4adb70d10b4"
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.729926 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.730010 4932 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bea5a18c-d812-46c0-8741-e4adb70d10b4"
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.794963 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568064-58rl6"
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.885047 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s69v\" (UniqueName: \"kubernetes.io/projected/3a1438e2-78ca-424b-968d-3a749eac42ec-kube-api-access-2s69v\") pod \"3a1438e2-78ca-424b-968d-3a749eac42ec\" (UID: \"3a1438e2-78ca-424b-968d-3a749eac42ec\") "
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.890976 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1438e2-78ca-424b-968d-3a749eac42ec-kube-api-access-2s69v" (OuterVolumeSpecName: "kube-api-access-2s69v") pod "3a1438e2-78ca-424b-968d-3a749eac42ec" (UID: "3a1438e2-78ca-424b-968d-3a749eac42ec"). InnerVolumeSpecName "kube-api-access-2s69v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:04:45 crc kubenswrapper[4932]: I0321 09:04:45.987030 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s69v\" (UniqueName: \"kubernetes.io/projected/3a1438e2-78ca-424b-968d-3a749eac42ec-kube-api-access-2s69v\") on node \"crc\" DevicePath \"\""
Mar 21 09:04:46 crc kubenswrapper[4932]: I0321 09:04:46.136657 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 21 09:04:46 crc kubenswrapper[4932]: I0321 09:04:46.511813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568064-58rl6" event={"ID":"3a1438e2-78ca-424b-968d-3a749eac42ec","Type":"ContainerDied","Data":"2eaf83f43cbf79d7a7588137dd2e60d78aa50de4938e8751d4ba0dd317b330a1"}
Mar 21 09:04:46 crc kubenswrapper[4932]: I0321 09:04:46.511873 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568064-58rl6"
Mar 21 09:04:46 crc kubenswrapper[4932]: I0321 09:04:46.511908 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eaf83f43cbf79d7a7588137dd2e60d78aa50de4938e8751d4ba0dd317b330a1"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.354221 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gx64c"]
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.355197 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gx64c" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="registry-server" containerID="cri-o://9b253d0c2573dd28f7279c66daa519534de24d3614e3b41c9dab8b4b58917f71" gracePeriod=30
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.382415 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbmnj"]
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.382861 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jbmnj" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="registry-server" containerID="cri-o://c22a40d8bb3fce3526514843e6c1b1f761ab4d70c763e9fe7ccd1de5b8126e23" gracePeriod=30
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.408154 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wpxg"]
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.408480 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" containerID="cri-o://7d903091095909d0e0f8ac4008c8097f42862b4198451d6f17751f0932e04682" gracePeriod=30
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.420449 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2kwc"]
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.420796 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2kwc" podUID="29534428-e319-412a-a850-53b180783073" containerName="registry-server" containerID="cri-o://673c2468511686b34a4cad7e9bbcd4ed5eab37d1964319c6f34ffbe3d8ff8847" gracePeriod=30
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.429910 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ttjm"]
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.430240 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ttjm" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="registry-server" containerID="cri-o://95e1b1e8ce0b1b08b5e0cceb6e864d34bca566da63baf46f9969ce8663e67bec" gracePeriod=30
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.439591 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2nnj"]
Mar 21 09:05:00 crc kubenswrapper[4932]: E0321 09:05:00.439947 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1438e2-78ca-424b-968d-3a749eac42ec" containerName="oc"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.439966 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1438e2-78ca-424b-968d-3a749eac42ec" containerName="oc"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.440085 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1438e2-78ca-424b-968d-3a749eac42ec" containerName="oc"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.440675 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.444325 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2nnj"]
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.495189 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a43728c7-245f-4a2e-8182-613692389bac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.495278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwsrg\" (UniqueName: \"kubernetes.io/projected/a43728c7-245f-4a2e-8182-613692389bac-kube-api-access-dwsrg\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.495389 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a43728c7-245f-4a2e-8182-613692389bac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.596133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a43728c7-245f-4a2e-8182-613692389bac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.596199 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwsrg\" (UniqueName: \"kubernetes.io/projected/a43728c7-245f-4a2e-8182-613692389bac-kube-api-access-dwsrg\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.596240 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a43728c7-245f-4a2e-8182-613692389bac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.599137 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a43728c7-245f-4a2e-8182-613692389bac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.613431 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a43728c7-245f-4a2e-8182-613692389bac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.620437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwsrg\" (UniqueName: \"kubernetes.io/projected/a43728c7-245f-4a2e-8182-613692389bac-kube-api-access-dwsrg\") pod \"marketplace-operator-79b997595-x2nnj\" (UID: \"a43728c7-245f-4a2e-8182-613692389bac\") " pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.621823 4932 generic.go:334] "Generic (PLEG): container finished" podID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerID="95e1b1e8ce0b1b08b5e0cceb6e864d34bca566da63baf46f9969ce8663e67bec" exitCode=0
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.621938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerDied","Data":"95e1b1e8ce0b1b08b5e0cceb6e864d34bca566da63baf46f9969ce8663e67bec"}
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.627003 4932 generic.go:334] "Generic (PLEG): container finished" podID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerID="9b253d0c2573dd28f7279c66daa519534de24d3614e3b41c9dab8b4b58917f71" exitCode=0
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.627081 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerDied","Data":"9b253d0c2573dd28f7279c66daa519534de24d3614e3b41c9dab8b4b58917f71"}
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.631574 4932 generic.go:334] "Generic (PLEG): container finished" podID="29534428-e319-412a-a850-53b180783073" containerID="673c2468511686b34a4cad7e9bbcd4ed5eab37d1964319c6f34ffbe3d8ff8847" exitCode=0
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.631644 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2kwc" event={"ID":"29534428-e319-412a-a850-53b180783073","Type":"ContainerDied","Data":"673c2468511686b34a4cad7e9bbcd4ed5eab37d1964319c6f34ffbe3d8ff8847"}
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.635519 4932 generic.go:334] "Generic (PLEG): container finished" podID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerID="7d903091095909d0e0f8ac4008c8097f42862b4198451d6f17751f0932e04682" exitCode=0
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.635595 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" event={"ID":"eb4e7142-4148-4ebc-864d-7f7c6cfbf237","Type":"ContainerDied","Data":"7d903091095909d0e0f8ac4008c8097f42862b4198451d6f17751f0932e04682"}
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.639883 4932 generic.go:334] "Generic (PLEG): container finished" podID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerID="c22a40d8bb3fce3526514843e6c1b1f761ab4d70c763e9fe7ccd1de5b8126e23" exitCode=0
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.639951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbmnj" event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerDied","Data":"c22a40d8bb3fce3526514843e6c1b1f761ab4d70c763e9fe7ccd1de5b8126e23"}
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.851109 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.855371 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gx64c"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.861401 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbmnj"
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.909507 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-catalog-content\") pod \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") "
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.910432 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-utilities\") pod \"4b4f0982-25bd-4a81-b00f-7b35377a893a\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") "
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.910476 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-utilities\") pod \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") "
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.910567 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8hhw\" (UniqueName: \"kubernetes.io/projected/4b4f0982-25bd-4a81-b00f-7b35377a893a-kube-api-access-x8hhw\") pod \"4b4f0982-25bd-4a81-b00f-7b35377a893a\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") "
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.910600 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-catalog-content\") pod \"4b4f0982-25bd-4a81-b00f-7b35377a893a\" (UID: \"4b4f0982-25bd-4a81-b00f-7b35377a893a\") "
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.910634 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz6dt\" (UniqueName: \"kubernetes.io/projected/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-kube-api-access-fz6dt\") pod \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\" (UID: \"bbc42726-34c5-4cd6-b2b4-5e27a325adbd\") "
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.911335 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-utilities" (OuterVolumeSpecName: "utilities") pod "bbc42726-34c5-4cd6-b2b4-5e27a325adbd" (UID: "bbc42726-34c5-4cd6-b2b4-5e27a325adbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.911482 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-utilities" (OuterVolumeSpecName: "utilities") pod "4b4f0982-25bd-4a81-b00f-7b35377a893a" (UID: "4b4f0982-25bd-4a81-b00f-7b35377a893a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.917090 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-kube-api-access-fz6dt" (OuterVolumeSpecName: "kube-api-access-fz6dt") pod "bbc42726-34c5-4cd6-b2b4-5e27a325adbd" (UID: "bbc42726-34c5-4cd6-b2b4-5e27a325adbd"). InnerVolumeSpecName "kube-api-access-fz6dt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:05:00 crc kubenswrapper[4932]: I0321 09:05:00.917141 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4f0982-25bd-4a81-b00f-7b35377a893a-kube-api-access-x8hhw" (OuterVolumeSpecName: "kube-api-access-x8hhw") pod "4b4f0982-25bd-4a81-b00f-7b35377a893a" (UID: "4b4f0982-25bd-4a81-b00f-7b35377a893a"). InnerVolumeSpecName "kube-api-access-x8hhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.007863 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b4f0982-25bd-4a81-b00f-7b35377a893a" (UID: "4b4f0982-25bd-4a81-b00f-7b35377a893a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.012876 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8hhw\" (UniqueName: \"kubernetes.io/projected/4b4f0982-25bd-4a81-b00f-7b35377a893a-kube-api-access-x8hhw\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.012901 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.012915 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz6dt\" (UniqueName: \"kubernetes.io/projected/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-kube-api-access-fz6dt\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.012927 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4f0982-25bd-4a81-b00f-7b35377a893a-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.013003 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.026978 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ttjm"
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.033619 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2kwc"
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.037842 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg"
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.045157 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbc42726-34c5-4cd6-b2b4-5e27a325adbd" (UID: "bbc42726-34c5-4cd6-b2b4-5e27a325adbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.114628 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbxjh\" (UniqueName: \"kubernetes.io/projected/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-kube-api-access-rbxjh\") pod \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.114694 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-utilities\") pod \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.114730 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-utilities\") pod \"29534428-e319-412a-a850-53b180783073\" (UID: \"29534428-e319-412a-a850-53b180783073\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.114759 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-operator-metrics\") pod \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.115632 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-utilities" (OuterVolumeSpecName: "utilities") pod "35a4e0fc-1b46-40ea-8c9f-f284960024e6" (UID: "35a4e0fc-1b46-40ea-8c9f-f284960024e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.118025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-utilities" (OuterVolumeSpecName: "utilities") pod "29534428-e319-412a-a850-53b180783073" (UID: "29534428-e319-412a-a850-53b180783073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.118239 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-catalog-content\") pod \"29534428-e319-412a-a850-53b180783073\" (UID: \"29534428-e319-412a-a850-53b180783073\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.120167 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "eb4e7142-4148-4ebc-864d-7f7c6cfbf237" (UID: "eb4e7142-4148-4ebc-864d-7f7c6cfbf237"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.121001 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29534428-e319-412a-a850-53b180783073-kube-api-access-l6g8c" (OuterVolumeSpecName: "kube-api-access-l6g8c") pod "29534428-e319-412a-a850-53b180783073" (UID: "29534428-e319-412a-a850-53b180783073"). InnerVolumeSpecName "kube-api-access-l6g8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.124960 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-kube-api-access-rbxjh" (OuterVolumeSpecName: "kube-api-access-rbxjh") pod "eb4e7142-4148-4ebc-864d-7f7c6cfbf237" (UID: "eb4e7142-4148-4ebc-864d-7f7c6cfbf237"). InnerVolumeSpecName "kube-api-access-rbxjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.118274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6g8c\" (UniqueName: \"kubernetes.io/projected/29534428-e319-412a-a850-53b180783073-kube-api-access-l6g8c\") pod \"29534428-e319-412a-a850-53b180783073\" (UID: \"29534428-e319-412a-a850-53b180783073\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.128532 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-catalog-content\") pod \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.128594 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-trusted-ca\") pod \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\" (UID: \"eb4e7142-4148-4ebc-864d-7f7c6cfbf237\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.128637 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9lc\" (UniqueName: \"kubernetes.io/projected/35a4e0fc-1b46-40ea-8c9f-f284960024e6-kube-api-access-dc9lc\") pod \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\" (UID: \"35a4e0fc-1b46-40ea-8c9f-f284960024e6\") "
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.129874 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6g8c\" (UniqueName: \"kubernetes.io/projected/29534428-e319-412a-a850-53b180783073-kube-api-access-l6g8c\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.129913 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbc42726-34c5-4cd6-b2b4-5e27a325adbd-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.129923 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbxjh\" (UniqueName: \"kubernetes.io/projected/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-kube-api-access-rbxjh\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.129935 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.129946 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.129956 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.131466 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "eb4e7142-4148-4ebc-864d-7f7c6cfbf237" (UID: "eb4e7142-4148-4ebc-864d-7f7c6cfbf237"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.135597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a4e0fc-1b46-40ea-8c9f-f284960024e6-kube-api-access-dc9lc" (OuterVolumeSpecName: "kube-api-access-dc9lc") pod "35a4e0fc-1b46-40ea-8c9f-f284960024e6" (UID: "35a4e0fc-1b46-40ea-8c9f-f284960024e6"). InnerVolumeSpecName "kube-api-access-dc9lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.165726 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29534428-e319-412a-a850-53b180783073" (UID: "29534428-e319-412a-a850-53b180783073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.204726 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x2nnj"]
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.231232 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29534428-e319-412a-a850-53b180783073-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.231284 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb4e7142-4148-4ebc-864d-7f7c6cfbf237-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.231303 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc9lc\" (UniqueName: \"kubernetes.io/projected/35a4e0fc-1b46-40ea-8c9f-f284960024e6-kube-api-access-dc9lc\") on
node \"crc\" DevicePath \"\"" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.303216 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35a4e0fc-1b46-40ea-8c9f-f284960024e6" (UID: "35a4e0fc-1b46-40ea-8c9f-f284960024e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.332517 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35a4e0fc-1b46-40ea-8c9f-f284960024e6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.649047 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2kwc" event={"ID":"29534428-e319-412a-a850-53b180783073","Type":"ContainerDied","Data":"bcd3217a2afd96a8c27fac9533c60ed930c650344de0b1e5c3eac14526124ee3"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.649126 4932 scope.go:117] "RemoveContainer" containerID="673c2468511686b34a4cad7e9bbcd4ed5eab37d1964319c6f34ffbe3d8ff8847" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.649072 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2kwc" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.651206 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" event={"ID":"eb4e7142-4148-4ebc-864d-7f7c6cfbf237","Type":"ContainerDied","Data":"3e2c80929feeaba227c32bd462f489eac8696132b1fed640ebdabe6c07de2ffe"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.651325 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wpxg" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.654235 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj" event={"ID":"a43728c7-245f-4a2e-8182-613692389bac","Type":"ContainerStarted","Data":"2afd4c01c86c71d5d0abcc3998493512100798180b111fbfdcabe828fce30249"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.654292 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj" event={"ID":"a43728c7-245f-4a2e-8182-613692389bac","Type":"ContainerStarted","Data":"098495ab1185e18d905ac09d11c759cc2a5bd22e7453c025dafd0fc9a190ee23"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.655101 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.657065 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-x2nnj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.657435 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj" podUID="a43728c7-245f-4a2e-8182-613692389bac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.660136 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jbmnj" 
event={"ID":"bbc42726-34c5-4cd6-b2b4-5e27a325adbd","Type":"ContainerDied","Data":"b8e2dc1415db45e0c52fe0ae581c8c341df26eea316e3fa1d5e98ddd8627c0fb"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.660266 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jbmnj" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.664499 4932 scope.go:117] "RemoveContainer" containerID="c55265913a2b9bfaa4d3b2acfc53f91ffe8a02e8c3a552f85415b3af92820bdc" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.666077 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ttjm" event={"ID":"35a4e0fc-1b46-40ea-8c9f-f284960024e6","Type":"ContainerDied","Data":"035796def1d6940e18c786cc979676eb42451c40236b0702cf6a6f2ef5de40b3"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.666105 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ttjm" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.670100 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gx64c" event={"ID":"4b4f0982-25bd-4a81-b00f-7b35377a893a","Type":"ContainerDied","Data":"f1f35476a21abac78b8fc8131f04ba21909aa8a27296dceebb9599d225d6e4ca"} Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.670188 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gx64c" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.679624 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj" podStartSLOduration=1.679597142 podStartE2EDuration="1.679597142s" podCreationTimestamp="2026-03-21 09:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:05:01.672965547 +0000 UTC m=+405.268163816" watchObservedRunningTime="2026-03-21 09:05:01.679597142 +0000 UTC m=+405.274795411" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.688573 4932 scope.go:117] "RemoveContainer" containerID="673ea7c8d5c23fca7a6adfe55e67df0a191eda9427cd9902e6ec35fe81b61887" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.740252 4932 scope.go:117] "RemoveContainer" containerID="7d903091095909d0e0f8ac4008c8097f42862b4198451d6f17751f0932e04682" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.747291 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2kwc"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.750214 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2kwc"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.761262 4932 scope.go:117] "RemoveContainer" containerID="c22a40d8bb3fce3526514843e6c1b1f761ab4d70c763e9fe7ccd1de5b8126e23" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.772396 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wpxg"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.777248 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wpxg"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.792402 4932 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ttjm"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.792684 4932 scope.go:117] "RemoveContainer" containerID="1d2f39a26ed17b767a1104da056fa93420bded8565e5f8c1487690df48733756" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.796502 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ttjm"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.807652 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jbmnj"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.810294 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jbmnj"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.816370 4932 scope.go:117] "RemoveContainer" containerID="70cd0ecf33ed5758a71a6bd6c73c8db98391de066a2c5fb17780c0f460cffa68" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.829880 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gx64c"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.833992 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gx64c"] Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.836752 4932 scope.go:117] "RemoveContainer" containerID="95e1b1e8ce0b1b08b5e0cceb6e864d34bca566da63baf46f9969ce8663e67bec" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.853170 4932 scope.go:117] "RemoveContainer" containerID="d5554802cc70cd1d738206d177b137e45c78ce10f6b16fd1e2c2a8ca1bff583a" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.881615 4932 scope.go:117] "RemoveContainer" containerID="d54a7199ce36da5b3a41f8a1cf92bb3489516ac76f08279748f938cc1d840fc9" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.896526 4932 scope.go:117] "RemoveContainer" 
containerID="9b253d0c2573dd28f7279c66daa519534de24d3614e3b41c9dab8b4b58917f71" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.912576 4932 scope.go:117] "RemoveContainer" containerID="9704b304a58206773164f472b8531c6200388b63e82ab1b3509901f28aaeb999" Mar 21 09:05:01 crc kubenswrapper[4932]: I0321 09:05:01.926959 4932 scope.go:117] "RemoveContainer" containerID="f9c6a60a9771d8f525b1289ebd3e0394ee86ce9389351f99465d03b3e6675b56" Mar 21 09:05:02 crc kubenswrapper[4932]: I0321 09:05:02.690266 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x2nnj" Mar 21 09:05:03 crc kubenswrapper[4932]: I0321 09:05:03.711121 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29534428-e319-412a-a850-53b180783073" path="/var/lib/kubelet/pods/29534428-e319-412a-a850-53b180783073/volumes" Mar 21 09:05:03 crc kubenswrapper[4932]: I0321 09:05:03.714222 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" path="/var/lib/kubelet/pods/35a4e0fc-1b46-40ea-8c9f-f284960024e6/volumes" Mar 21 09:05:03 crc kubenswrapper[4932]: I0321 09:05:03.715570 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" path="/var/lib/kubelet/pods/4b4f0982-25bd-4a81-b00f-7b35377a893a/volumes" Mar 21 09:05:03 crc kubenswrapper[4932]: I0321 09:05:03.717735 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" path="/var/lib/kubelet/pods/bbc42726-34c5-4cd6-b2b4-5e27a325adbd/volumes" Mar 21 09:05:03 crc kubenswrapper[4932]: I0321 09:05:03.719227 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" path="/var/lib/kubelet/pods/eb4e7142-4148-4ebc-864d-7f7c6cfbf237/volumes" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.395750 4932 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-h44lz"] Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396662 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396675 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396685 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396691 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396704 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396710 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396718 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29534428-e319-412a-a850-53b180783073" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396725 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29534428-e319-412a-a850-53b180783073" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396733 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396738 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396747 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396754 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396761 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396767 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396775 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29534428-e319-412a-a850-53b180783073" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396781 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29534428-e319-412a-a850-53b180783073" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396794 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29534428-e319-412a-a850-53b180783073" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396799 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29534428-e319-412a-a850-53b180783073" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396807 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396813 4932 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="extract-utilities" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396823 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396829 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396836 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396842 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: E0321 09:05:55.396850 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396855 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="extract-content" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396946 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a4e0fc-1b46-40ea-8c9f-f284960024e6" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396956 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="29534428-e319-412a-a850-53b180783073" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396963 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc42726-34c5-4cd6-b2b4-5e27a325adbd" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396970 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4b4f0982-25bd-4a81-b00f-7b35377a893a" containerName="registry-server" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.396980 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4e7142-4148-4ebc-864d-7f7c6cfbf237" containerName="marketplace-operator" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.397742 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.400826 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.410966 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h44lz"] Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.515281 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s82rw\" (UniqueName: \"kubernetes.io/projected/184f08a1-0394-4048-aadd-4bce2dfbd1e5-kube-api-access-s82rw\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.515397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184f08a1-0394-4048-aadd-4bce2dfbd1e5-catalog-content\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.515482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184f08a1-0394-4048-aadd-4bce2dfbd1e5-utilities\") pod 
\"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.588467 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zflgp"] Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.589508 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.593405 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.606514 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zflgp"] Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.616233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184f08a1-0394-4048-aadd-4bce2dfbd1e5-catalog-content\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.616313 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184f08a1-0394-4048-aadd-4bce2dfbd1e5-utilities\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.616421 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s82rw\" (UniqueName: \"kubernetes.io/projected/184f08a1-0394-4048-aadd-4bce2dfbd1e5-kube-api-access-s82rw\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " 
pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.616783 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184f08a1-0394-4048-aadd-4bce2dfbd1e5-catalog-content\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.617023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184f08a1-0394-4048-aadd-4bce2dfbd1e5-utilities\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.647829 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s82rw\" (UniqueName: \"kubernetes.io/projected/184f08a1-0394-4048-aadd-4bce2dfbd1e5-kube-api-access-s82rw\") pod \"community-operators-h44lz\" (UID: \"184f08a1-0394-4048-aadd-4bce2dfbd1e5\") " pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.717875 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3012669a-397a-40bb-84a1-d53f4d3bb944-utilities\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.717943 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5ls\" (UniqueName: \"kubernetes.io/projected/3012669a-397a-40bb-84a1-d53f4d3bb944-kube-api-access-ch5ls\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " 
pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.717984 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3012669a-397a-40bb-84a1-d53f4d3bb944-catalog-content\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.765843 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.818796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch5ls\" (UniqueName: \"kubernetes.io/projected/3012669a-397a-40bb-84a1-d53f4d3bb944-kube-api-access-ch5ls\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.818896 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3012669a-397a-40bb-84a1-d53f4d3bb944-catalog-content\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.818974 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3012669a-397a-40bb-84a1-d53f4d3bb944-utilities\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.821341 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3012669a-397a-40bb-84a1-d53f4d3bb944-catalog-content\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.821439 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3012669a-397a-40bb-84a1-d53f4d3bb944-utilities\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.836628 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch5ls\" (UniqueName: \"kubernetes.io/projected/3012669a-397a-40bb-84a1-d53f4d3bb944-kube-api-access-ch5ls\") pod \"certified-operators-zflgp\" (UID: \"3012669a-397a-40bb-84a1-d53f4d3bb944\") " pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:55 crc kubenswrapper[4932]: I0321 09:05:55.905621 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:05:56 crc kubenswrapper[4932]: I0321 09:05:56.118426 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zflgp"] Mar 21 09:05:56 crc kubenswrapper[4932]: I0321 09:05:56.192604 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h44lz"] Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.020100 4932 generic.go:334] "Generic (PLEG): container finished" podID="184f08a1-0394-4048-aadd-4bce2dfbd1e5" containerID="b11aec88d59366f421a5d06406f01b102800ff3ba0d7c10904b6511d344dafd0" exitCode=0 Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.020161 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h44lz" event={"ID":"184f08a1-0394-4048-aadd-4bce2dfbd1e5","Type":"ContainerDied","Data":"b11aec88d59366f421a5d06406f01b102800ff3ba0d7c10904b6511d344dafd0"} Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.020214 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h44lz" event={"ID":"184f08a1-0394-4048-aadd-4bce2dfbd1e5","Type":"ContainerStarted","Data":"6c8339c23012adc1298f75bcf2d4ade1701cee5035f1c1b111c0617af150e95d"} Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.022270 4932 generic.go:334] "Generic (PLEG): container finished" podID="3012669a-397a-40bb-84a1-d53f4d3bb944" containerID="d363ed528ca7178ef374ef0d629e2adfde578735861cf1ce84bf79f3822c59ea" exitCode=0 Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.022315 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflgp" event={"ID":"3012669a-397a-40bb-84a1-d53f4d3bb944","Type":"ContainerDied","Data":"d363ed528ca7178ef374ef0d629e2adfde578735861cf1ce84bf79f3822c59ea"} Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.022385 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflgp" event={"ID":"3012669a-397a-40bb-84a1-d53f4d3bb944","Type":"ContainerStarted","Data":"9d064f9ac90973ee61d9828df24cc3c0ac8de571229fbd90e2625ed72b5ae41f"} Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.788995 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gqjtm"] Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.793136 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.796047 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.805626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqjtm"] Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.852980 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f9605a-9fcc-4483-870a-e5075598662e-utilities\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.853031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f9605a-9fcc-4483-870a-e5075598662e-catalog-content\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.853062 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnx94\" (UniqueName: 
\"kubernetes.io/projected/09f9605a-9fcc-4483-870a-e5075598662e-kube-api-access-pnx94\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.955109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f9605a-9fcc-4483-870a-e5075598662e-catalog-content\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.955578 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09f9605a-9fcc-4483-870a-e5075598662e-utilities\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.955715 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnx94\" (UniqueName: \"kubernetes.io/projected/09f9605a-9fcc-4483-870a-e5075598662e-kube-api-access-pnx94\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.956776 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09f9605a-9fcc-4483-870a-e5075598662e-catalog-content\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.957165 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/09f9605a-9fcc-4483-870a-e5075598662e-utilities\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:57 crc kubenswrapper[4932]: I0321 09:05:57.977231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnx94\" (UniqueName: \"kubernetes.io/projected/09f9605a-9fcc-4483-870a-e5075598662e-kube-api-access-pnx94\") pod \"redhat-marketplace-gqjtm\" (UID: \"09f9605a-9fcc-4483-870a-e5075598662e\") " pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.005917 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7s5fk"] Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.007457 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.009597 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.022381 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7s5fk"] Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.030290 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h44lz" event={"ID":"184f08a1-0394-4048-aadd-4bce2dfbd1e5","Type":"ContainerStarted","Data":"073a35ec13ddfcd39f713ef508ea4b218fd0d782ae2b804bdcc64e9927882590"} Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.033190 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflgp" event={"ID":"3012669a-397a-40bb-84a1-d53f4d3bb944","Type":"ContainerStarted","Data":"d6fee7b41d2a3bec62d0751fadb8ee57f97b3bc8c4f9127b38467416d9075bfc"} Mar 21 09:05:58 crc 
kubenswrapper[4932]: I0321 09:05:58.058423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-catalog-content\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.058499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvb7v\" (UniqueName: \"kubernetes.io/projected/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-kube-api-access-pvb7v\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.058693 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-utilities\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.117475 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.161004 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-utilities\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.161088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-catalog-content\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.161123 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvb7v\" (UniqueName: \"kubernetes.io/projected/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-kube-api-access-pvb7v\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.162659 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-catalog-content\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.162796 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-utilities\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " 
pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.184001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvb7v\" (UniqueName: \"kubernetes.io/projected/3f2bb9ef-e246-4b13-a8a4-a9d2135fb743-kube-api-access-pvb7v\") pod \"redhat-operators-7s5fk\" (UID: \"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743\") " pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.324203 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.533602 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqjtm"] Mar 21 09:05:58 crc kubenswrapper[4932]: W0321 09:05:58.533776 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09f9605a_9fcc_4483_870a_e5075598662e.slice/crio-0f01c59e026fb5c8314ff70876c541dffd8c6ceb8eae0b1e83a4f8f9f8a891eb WatchSource:0}: Error finding container 0f01c59e026fb5c8314ff70876c541dffd8c6ceb8eae0b1e83a4f8f9f8a891eb: Status 404 returned error can't find the container with id 0f01c59e026fb5c8314ff70876c541dffd8c6ceb8eae0b1e83a4f8f9f8a891eb Mar 21 09:05:58 crc kubenswrapper[4932]: I0321 09:05:58.815097 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7s5fk"] Mar 21 09:05:58 crc kubenswrapper[4932]: W0321 09:05:58.823860 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f2bb9ef_e246_4b13_a8a4_a9d2135fb743.slice/crio-c0aa733217f92c7a3a0626d4f552915491349803584c4eabc1f5ee0188f2033d WatchSource:0}: Error finding container c0aa733217f92c7a3a0626d4f552915491349803584c4eabc1f5ee0188f2033d: Status 404 returned error can't find the container with id 
c0aa733217f92c7a3a0626d4f552915491349803584c4eabc1f5ee0188f2033d Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.041607 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f2bb9ef-e246-4b13-a8a4-a9d2135fb743" containerID="926d2386153e426443ac77e1d108f4b14b721533f664f9cdea659d409689d83e" exitCode=0 Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.041788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7s5fk" event={"ID":"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743","Type":"ContainerDied","Data":"926d2386153e426443ac77e1d108f4b14b721533f664f9cdea659d409689d83e"} Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.042067 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7s5fk" event={"ID":"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743","Type":"ContainerStarted","Data":"c0aa733217f92c7a3a0626d4f552915491349803584c4eabc1f5ee0188f2033d"} Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.045308 4932 generic.go:334] "Generic (PLEG): container finished" podID="09f9605a-9fcc-4483-870a-e5075598662e" containerID="ba4ff945dd96d77fbdfecd23d9b63074fe70246e451fc66eb554aa7c24e41908" exitCode=0 Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.045413 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqjtm" event={"ID":"09f9605a-9fcc-4483-870a-e5075598662e","Type":"ContainerDied","Data":"ba4ff945dd96d77fbdfecd23d9b63074fe70246e451fc66eb554aa7c24e41908"} Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.045446 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqjtm" event={"ID":"09f9605a-9fcc-4483-870a-e5075598662e","Type":"ContainerStarted","Data":"0f01c59e026fb5c8314ff70876c541dffd8c6ceb8eae0b1e83a4f8f9f8a891eb"} Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.053056 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="184f08a1-0394-4048-aadd-4bce2dfbd1e5" containerID="073a35ec13ddfcd39f713ef508ea4b218fd0d782ae2b804bdcc64e9927882590" exitCode=0 Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.053252 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h44lz" event={"ID":"184f08a1-0394-4048-aadd-4bce2dfbd1e5","Type":"ContainerDied","Data":"073a35ec13ddfcd39f713ef508ea4b218fd0d782ae2b804bdcc64e9927882590"} Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.056470 4932 generic.go:334] "Generic (PLEG): container finished" podID="3012669a-397a-40bb-84a1-d53f4d3bb944" containerID="d6fee7b41d2a3bec62d0751fadb8ee57f97b3bc8c4f9127b38467416d9075bfc" exitCode=0 Mar 21 09:05:59 crc kubenswrapper[4932]: I0321 09:05:59.056713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflgp" event={"ID":"3012669a-397a-40bb-84a1-d53f4d3bb944","Type":"ContainerDied","Data":"d6fee7b41d2a3bec62d0751fadb8ee57f97b3bc8c4f9127b38467416d9075bfc"} Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.066780 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h44lz" event={"ID":"184f08a1-0394-4048-aadd-4bce2dfbd1e5","Type":"ContainerStarted","Data":"b5722b09e5f9a50f61eec98c374d44428c6eb13a95a38113f4f0f50117731be3"} Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.071235 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflgp" event={"ID":"3012669a-397a-40bb-84a1-d53f4d3bb944","Type":"ContainerStarted","Data":"2ff839bfa8bfabb3cb8b76971cf0132ca4551daa21ca350e4688ddb2583ff554"} Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.073940 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7s5fk" 
event={"ID":"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743","Type":"ContainerStarted","Data":"1d5695b37885eb3448bf436d36898b5dd30ddd8f71961a8b75ec1726fd4df20b"} Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.076264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqjtm" event={"ID":"09f9605a-9fcc-4483-870a-e5075598662e","Type":"ContainerStarted","Data":"8898b9056c18141c0ca9bccaf1cf0551b41022b66d5018d60c5dfa4a7fc3b7d4"} Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.092212 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h44lz" podStartSLOduration=2.504595948 podStartE2EDuration="5.092190225s" podCreationTimestamp="2026-03-21 09:05:55 +0000 UTC" firstStartedPulling="2026-03-21 09:05:57.02281761 +0000 UTC m=+460.618015889" lastFinishedPulling="2026-03-21 09:05:59.610411887 +0000 UTC m=+463.205610166" observedRunningTime="2026-03-21 09:06:00.090730399 +0000 UTC m=+463.685928658" watchObservedRunningTime="2026-03-21 09:06:00.092190225 +0000 UTC m=+463.687388494" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.165650 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568066-dq5zd"] Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.168549 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.173324 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.173404 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.173659 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.176989 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zflgp" podStartSLOduration=2.410612184 podStartE2EDuration="5.176959259s" podCreationTimestamp="2026-03-21 09:05:55 +0000 UTC" firstStartedPulling="2026-03-21 09:05:57.023695803 +0000 UTC m=+460.618894112" lastFinishedPulling="2026-03-21 09:05:59.790042918 +0000 UTC m=+463.385241187" observedRunningTime="2026-03-21 09:06:00.165230765 +0000 UTC m=+463.760429034" watchObservedRunningTime="2026-03-21 09:06:00.176959259 +0000 UTC m=+463.772157528" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.193674 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568066-dq5zd"] Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.225909 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.225978 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" 
podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.291079 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7tdc\" (UniqueName: \"kubernetes.io/projected/6fa66e26-f3bb-4771-8bdd-c349fadbac4e-kube-api-access-m7tdc\") pod \"auto-csr-approver-29568066-dq5zd\" (UID: \"6fa66e26-f3bb-4771-8bdd-c349fadbac4e\") " pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.392534 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7tdc\" (UniqueName: \"kubernetes.io/projected/6fa66e26-f3bb-4771-8bdd-c349fadbac4e-kube-api-access-m7tdc\") pod \"auto-csr-approver-29568066-dq5zd\" (UID: \"6fa66e26-f3bb-4771-8bdd-c349fadbac4e\") " pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.422551 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7tdc\" (UniqueName: \"kubernetes.io/projected/6fa66e26-f3bb-4771-8bdd-c349fadbac4e-kube-api-access-m7tdc\") pod \"auto-csr-approver-29568066-dq5zd\" (UID: \"6fa66e26-f3bb-4771-8bdd-c349fadbac4e\") " pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.557567 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:00 crc kubenswrapper[4932]: I0321 09:06:00.783948 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568066-dq5zd"] Mar 21 09:06:01 crc kubenswrapper[4932]: I0321 09:06:01.087816 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f2bb9ef-e246-4b13-a8a4-a9d2135fb743" containerID="1d5695b37885eb3448bf436d36898b5dd30ddd8f71961a8b75ec1726fd4df20b" exitCode=0 Mar 21 09:06:01 crc kubenswrapper[4932]: I0321 09:06:01.087943 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7s5fk" event={"ID":"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743","Type":"ContainerDied","Data":"1d5695b37885eb3448bf436d36898b5dd30ddd8f71961a8b75ec1726fd4df20b"} Mar 21 09:06:01 crc kubenswrapper[4932]: I0321 09:06:01.090261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" event={"ID":"6fa66e26-f3bb-4771-8bdd-c349fadbac4e","Type":"ContainerStarted","Data":"bc58e496fa19d45e13b0e9acbfc3ec49d9e4df125b3c1c133517780ad6904c0f"} Mar 21 09:06:01 crc kubenswrapper[4932]: I0321 09:06:01.095577 4932 generic.go:334] "Generic (PLEG): container finished" podID="09f9605a-9fcc-4483-870a-e5075598662e" containerID="8898b9056c18141c0ca9bccaf1cf0551b41022b66d5018d60c5dfa4a7fc3b7d4" exitCode=0 Mar 21 09:06:01 crc kubenswrapper[4932]: I0321 09:06:01.096260 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqjtm" event={"ID":"09f9605a-9fcc-4483-870a-e5075598662e","Type":"ContainerDied","Data":"8898b9056c18141c0ca9bccaf1cf0551b41022b66d5018d60c5dfa4a7fc3b7d4"} Mar 21 09:06:02 crc kubenswrapper[4932]: I0321 09:06:02.103134 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqjtm" 
event={"ID":"09f9605a-9fcc-4483-870a-e5075598662e","Type":"ContainerStarted","Data":"70b2b806b4148e91b5804f627c03a232a5234931a503407648256733a1a2b131"} Mar 21 09:06:02 crc kubenswrapper[4932]: I0321 09:06:02.106387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7s5fk" event={"ID":"3f2bb9ef-e246-4b13-a8a4-a9d2135fb743","Type":"ContainerStarted","Data":"6c30817b2ecf0529e458bac432a286c14d0320713f8d98196964eb9deb8f1ddf"} Mar 21 09:06:02 crc kubenswrapper[4932]: I0321 09:06:02.108636 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" event={"ID":"6fa66e26-f3bb-4771-8bdd-c349fadbac4e","Type":"ContainerStarted","Data":"fe0e9c371b6444506c05678de87fe5477a113918227e28a193e5c1962853c1c4"} Mar 21 09:06:02 crc kubenswrapper[4932]: I0321 09:06:02.126576 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gqjtm" podStartSLOduration=2.391126351 podStartE2EDuration="5.126559084s" podCreationTimestamp="2026-03-21 09:05:57 +0000 UTC" firstStartedPulling="2026-03-21 09:05:59.049953895 +0000 UTC m=+462.645152164" lastFinishedPulling="2026-03-21 09:06:01.785386638 +0000 UTC m=+465.380584897" observedRunningTime="2026-03-21 09:06:02.122780982 +0000 UTC m=+465.717979251" watchObservedRunningTime="2026-03-21 09:06:02.126559084 +0000 UTC m=+465.721757353" Mar 21 09:06:02 crc kubenswrapper[4932]: I0321 09:06:02.140785 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" podStartSLOduration=1.376034417 podStartE2EDuration="2.140767393s" podCreationTimestamp="2026-03-21 09:06:00 +0000 UTC" firstStartedPulling="2026-03-21 09:06:00.801675018 +0000 UTC m=+464.396873297" lastFinishedPulling="2026-03-21 09:06:01.566408004 +0000 UTC m=+465.161606273" observedRunningTime="2026-03-21 09:06:02.13462591 +0000 UTC m=+465.729824179" 
watchObservedRunningTime="2026-03-21 09:06:02.140767393 +0000 UTC m=+465.735965652" Mar 21 09:06:03 crc kubenswrapper[4932]: I0321 09:06:03.121082 4932 generic.go:334] "Generic (PLEG): container finished" podID="6fa66e26-f3bb-4771-8bdd-c349fadbac4e" containerID="fe0e9c371b6444506c05678de87fe5477a113918227e28a193e5c1962853c1c4" exitCode=0 Mar 21 09:06:03 crc kubenswrapper[4932]: I0321 09:06:03.122904 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" event={"ID":"6fa66e26-f3bb-4771-8bdd-c349fadbac4e","Type":"ContainerDied","Data":"fe0e9c371b6444506c05678de87fe5477a113918227e28a193e5c1962853c1c4"} Mar 21 09:06:03 crc kubenswrapper[4932]: I0321 09:06:03.138511 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7s5fk" podStartSLOduration=3.67545172 podStartE2EDuration="6.138488405s" podCreationTimestamp="2026-03-21 09:05:57 +0000 UTC" firstStartedPulling="2026-03-21 09:05:59.043684347 +0000 UTC m=+462.638882616" lastFinishedPulling="2026-03-21 09:06:01.506721022 +0000 UTC m=+465.101919301" observedRunningTime="2026-03-21 09:06:02.1573056 +0000 UTC m=+465.752503879" watchObservedRunningTime="2026-03-21 09:06:03.138488405 +0000 UTC m=+466.733686684" Mar 21 09:06:04 crc kubenswrapper[4932]: I0321 09:06:04.362404 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:04 crc kubenswrapper[4932]: I0321 09:06:04.447327 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7tdc\" (UniqueName: \"kubernetes.io/projected/6fa66e26-f3bb-4771-8bdd-c349fadbac4e-kube-api-access-m7tdc\") pod \"6fa66e26-f3bb-4771-8bdd-c349fadbac4e\" (UID: \"6fa66e26-f3bb-4771-8bdd-c349fadbac4e\") " Mar 21 09:06:04 crc kubenswrapper[4932]: I0321 09:06:04.461584 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa66e26-f3bb-4771-8bdd-c349fadbac4e-kube-api-access-m7tdc" (OuterVolumeSpecName: "kube-api-access-m7tdc") pod "6fa66e26-f3bb-4771-8bdd-c349fadbac4e" (UID: "6fa66e26-f3bb-4771-8bdd-c349fadbac4e"). InnerVolumeSpecName "kube-api-access-m7tdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:06:04 crc kubenswrapper[4932]: I0321 09:06:04.548280 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7tdc\" (UniqueName: \"kubernetes.io/projected/6fa66e26-f3bb-4771-8bdd-c349fadbac4e-kube-api-access-m7tdc\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.135786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" event={"ID":"6fa66e26-f3bb-4771-8bdd-c349fadbac4e","Type":"ContainerDied","Data":"bc58e496fa19d45e13b0e9acbfc3ec49d9e4df125b3c1c133517780ad6904c0f"} Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.135847 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc58e496fa19d45e13b0e9acbfc3ec49d9e4df125b3c1c133517780ad6904c0f" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.135853 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568066-dq5zd" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.425014 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568060-6lptj"] Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.427911 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568060-6lptj"] Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.709798 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d9430a-2f41-4dac-bfa7-9fa47a85db9a" path="/var/lib/kubelet/pods/64d9430a-2f41-4dac-bfa7-9fa47a85db9a/volumes" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.766074 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.766133 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.814024 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.905819 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.905866 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:06:05 crc kubenswrapper[4932]: I0321 09:06:05.954683 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.184229 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-zflgp" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.186571 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h44lz" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.635501 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tgzpt"] Mar 21 09:06:06 crc kubenswrapper[4932]: E0321 09:06:06.635860 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa66e26-f3bb-4771-8bdd-c349fadbac4e" containerName="oc" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.635885 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa66e26-f3bb-4771-8bdd-c349fadbac4e" containerName="oc" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.636138 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa66e26-f3bb-4771-8bdd-c349fadbac4e" containerName="oc" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.636818 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.664222 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tgzpt"] Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677133 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677196 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-registry-tls\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-bound-sa-token\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw7qc\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-kube-api-access-rw7qc\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677303 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677446 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-trusted-ca\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.677740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-registry-certificates\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.732922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.778827 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw7qc\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-kube-api-access-rw7qc\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.778910 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.778993 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-trusted-ca\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.779023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-registry-certificates\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 
09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.779049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.779070 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-registry-tls\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.779089 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-bound-sa-token\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.781739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-registry-certificates\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.782009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.783235 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-trusted-ca\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.787507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.787507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-registry-tls\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.800503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw7qc\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-kube-api-access-rw7qc\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: \"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.800676 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df8f4d96-f8ab-4b6b-82a1-f566f7a7d023-bound-sa-token\") pod \"image-registry-66df7c8f76-tgzpt\" (UID: 
\"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023\") " pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:06 crc kubenswrapper[4932]: I0321 09:06:06.956429 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:07 crc kubenswrapper[4932]: I0321 09:06:07.389218 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tgzpt"] Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.118184 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.118246 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.179879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" event={"ID":"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023","Type":"ContainerStarted","Data":"ab0e124d109d83fc1f8b3e1f34c94ee745c67794caf8c2d43db614e424191579"} Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.179942 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" event={"ID":"df8f4d96-f8ab-4b6b-82a1-f566f7a7d023","Type":"ContainerStarted","Data":"13e532d1de214008a3ac001a10bfc2368c98acbfe418e849676ec3de08223196"} Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.180077 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.186521 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.210498 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" podStartSLOduration=2.210469055 podStartE2EDuration="2.210469055s" podCreationTimestamp="2026-03-21 09:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:06:08.206866538 +0000 UTC m=+471.802064817" watchObservedRunningTime="2026-03-21 09:06:08.210469055 +0000 UTC m=+471.805667354" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.225597 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gqjtm" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.324529 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:06:08 crc kubenswrapper[4932]: I0321 09:06:08.324574 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:06:09 crc kubenswrapper[4932]: I0321 09:06:09.363719 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7s5fk" podUID="3f2bb9ef-e246-4b13-a8a4-a9d2135fb743" containerName="registry-server" probeResult="failure" output=< Mar 21 09:06:09 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 09:06:09 crc kubenswrapper[4932]: > Mar 21 09:06:18 crc kubenswrapper[4932]: I0321 09:06:18.369372 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:06:18 crc kubenswrapper[4932]: I0321 09:06:18.421445 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7s5fk" Mar 21 09:06:26 crc kubenswrapper[4932]: I0321 09:06:26.965136 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tgzpt" Mar 21 09:06:27 crc kubenswrapper[4932]: I0321 09:06:27.046199 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kbfjw"] Mar 21 09:06:30 crc kubenswrapper[4932]: I0321 09:06:30.227149 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:06:30 crc kubenswrapper[4932]: I0321 09:06:30.227285 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.097877 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" podUID="32d20a8b-1cb1-4ca7-a47c-0b5325424e43" containerName="registry" containerID="cri-o://c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538" gracePeriod=30 Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.471014 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.493528 4932 generic.go:334] "Generic (PLEG): container finished" podID="32d20a8b-1cb1-4ca7-a47c-0b5325424e43" containerID="c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538" exitCode=0 Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.493770 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" event={"ID":"32d20a8b-1cb1-4ca7-a47c-0b5325424e43","Type":"ContainerDied","Data":"c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538"} Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.493890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" event={"ID":"32d20a8b-1cb1-4ca7-a47c-0b5325424e43","Type":"ContainerDied","Data":"04bc17828ed7c761cdd3606f5d6a4f65be267d870890f75af826bd6e9fa43405"} Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.493951 4932 scope.go:117] "RemoveContainer" containerID="c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.494131 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kbfjw" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.513445 4932 scope.go:117] "RemoveContainer" containerID="c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538" Mar 21 09:06:52 crc kubenswrapper[4932]: E0321 09:06:52.514107 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538\": container with ID starting with c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538 not found: ID does not exist" containerID="c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.514190 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538"} err="failed to get container status \"c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538\": rpc error: code = NotFound desc = could not find container \"c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538\": container with ID starting with c0bfe0407b39504d02939075dbcf72370fa7184599d61bae0623614e889e4538 not found: ID does not exist" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.627688 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-tls\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.627761 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-bound-sa-token\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: 
\"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.627819 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-trusted-ca\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.628208 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.628375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-certificates\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.628416 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbwbd\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-kube-api-access-bbwbd\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.628459 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-ca-trust-extracted\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.628513 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-installation-pull-secrets\") pod \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\" (UID: \"32d20a8b-1cb1-4ca7-a47c-0b5325424e43\") " Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.630926 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.631458 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.635538 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-kube-api-access-bbwbd" (OuterVolumeSpecName: "kube-api-access-bbwbd") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "kube-api-access-bbwbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.638436 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.639481 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.640683 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.646107 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.659509 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "32d20a8b-1cb1-4ca7-a47c-0b5325424e43" (UID: "32d20a8b-1cb1-4ca7-a47c-0b5325424e43"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739756 4932 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739830 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739845 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739861 4932 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739882 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbwbd\" (UniqueName: \"kubernetes.io/projected/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-kube-api-access-bbwbd\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739901 4932 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.739915 4932 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32d20a8b-1cb1-4ca7-a47c-0b5325424e43-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 21 09:06:52 crc 
kubenswrapper[4932]: I0321 09:06:52.832156 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kbfjw"] Mar 21 09:06:52 crc kubenswrapper[4932]: I0321 09:06:52.839918 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kbfjw"] Mar 21 09:06:53 crc kubenswrapper[4932]: I0321 09:06:53.710384 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d20a8b-1cb1-4ca7-a47c-0b5325424e43" path="/var/lib/kubelet/pods/32d20a8b-1cb1-4ca7-a47c-0b5325424e43/volumes" Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.227160 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.228237 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.228332 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.229606 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6d1a99812a9df2e3a0aed95e8032e795b8e981cbae4b19e570cfe3a8c155d8a"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:07:00 
crc kubenswrapper[4932]: I0321 09:07:00.229768 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://d6d1a99812a9df2e3a0aed95e8032e795b8e981cbae4b19e570cfe3a8c155d8a" gracePeriod=600 Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.556561 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="d6d1a99812a9df2e3a0aed95e8032e795b8e981cbae4b19e570cfe3a8c155d8a" exitCode=0 Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.556685 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"d6d1a99812a9df2e3a0aed95e8032e795b8e981cbae4b19e570cfe3a8c155d8a"} Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.557109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"edd87e1d2bb0a42fc79fe0fc8fafbf90d9697e65cfd8fc6baf73b7211d563b23"} Mar 21 09:07:00 crc kubenswrapper[4932]: I0321 09:07:00.557145 4932 scope.go:117] "RemoveContainer" containerID="f1509f41cc97ef933d59df486632ea55fad7536ab240d59bd502b46635f0dbfb" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.153236 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568068-zdq5z"] Mar 21 09:08:00 crc kubenswrapper[4932]: E0321 09:08:00.156214 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d20a8b-1cb1-4ca7-a47c-0b5325424e43" containerName="registry" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.156413 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="32d20a8b-1cb1-4ca7-a47c-0b5325424e43" containerName="registry" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.156725 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d20a8b-1cb1-4ca7-a47c-0b5325424e43" containerName="registry" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.157757 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.161025 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.161647 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.162562 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.175271 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568068-zdq5z"] Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.267241 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjqh2\" (UniqueName: \"kubernetes.io/projected/50e36e90-607d-4a4d-9736-ab89fe133c9a-kube-api-access-pjqh2\") pod \"auto-csr-approver-29568068-zdq5z\" (UID: \"50e36e90-607d-4a4d-9736-ab89fe133c9a\") " pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.369206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjqh2\" (UniqueName: \"kubernetes.io/projected/50e36e90-607d-4a4d-9736-ab89fe133c9a-kube-api-access-pjqh2\") pod \"auto-csr-approver-29568068-zdq5z\" (UID: \"50e36e90-607d-4a4d-9736-ab89fe133c9a\") " 
pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.394300 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjqh2\" (UniqueName: \"kubernetes.io/projected/50e36e90-607d-4a4d-9736-ab89fe133c9a-kube-api-access-pjqh2\") pod \"auto-csr-approver-29568068-zdq5z\" (UID: \"50e36e90-607d-4a4d-9736-ab89fe133c9a\") " pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.488252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.911886 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568068-zdq5z"] Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.923418 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:08:00 crc kubenswrapper[4932]: I0321 09:08:00.991551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" event={"ID":"50e36e90-607d-4a4d-9736-ab89fe133c9a","Type":"ContainerStarted","Data":"d23f252fd6c7df7b3fe4985555e7c89fa6ec5a923d7acd4d1b45dde6a617f64d"} Mar 21 09:08:03 crc kubenswrapper[4932]: I0321 09:08:03.008691 4932 generic.go:334] "Generic (PLEG): container finished" podID="50e36e90-607d-4a4d-9736-ab89fe133c9a" containerID="e741ac433bf70b841f9c844875a94b5cea4c853e26f8aef7bd947bad3f799475" exitCode=0 Mar 21 09:08:03 crc kubenswrapper[4932]: I0321 09:08:03.008784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" event={"ID":"50e36e90-607d-4a4d-9736-ab89fe133c9a","Type":"ContainerDied","Data":"e741ac433bf70b841f9c844875a94b5cea4c853e26f8aef7bd947bad3f799475"} Mar 21 09:08:04 crc kubenswrapper[4932]: I0321 09:08:04.246494 4932 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:04 crc kubenswrapper[4932]: I0321 09:08:04.333814 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjqh2\" (UniqueName: \"kubernetes.io/projected/50e36e90-607d-4a4d-9736-ab89fe133c9a-kube-api-access-pjqh2\") pod \"50e36e90-607d-4a4d-9736-ab89fe133c9a\" (UID: \"50e36e90-607d-4a4d-9736-ab89fe133c9a\") " Mar 21 09:08:04 crc kubenswrapper[4932]: I0321 09:08:04.343767 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e36e90-607d-4a4d-9736-ab89fe133c9a-kube-api-access-pjqh2" (OuterVolumeSpecName: "kube-api-access-pjqh2") pod "50e36e90-607d-4a4d-9736-ab89fe133c9a" (UID: "50e36e90-607d-4a4d-9736-ab89fe133c9a"). InnerVolumeSpecName "kube-api-access-pjqh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:08:04 crc kubenswrapper[4932]: I0321 09:08:04.435315 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjqh2\" (UniqueName: \"kubernetes.io/projected/50e36e90-607d-4a4d-9736-ab89fe133c9a-kube-api-access-pjqh2\") on node \"crc\" DevicePath \"\"" Mar 21 09:08:05 crc kubenswrapper[4932]: I0321 09:08:05.023991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" event={"ID":"50e36e90-607d-4a4d-9736-ab89fe133c9a","Type":"ContainerDied","Data":"d23f252fd6c7df7b3fe4985555e7c89fa6ec5a923d7acd4d1b45dde6a617f64d"} Mar 21 09:08:05 crc kubenswrapper[4932]: I0321 09:08:05.024078 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23f252fd6c7df7b3fe4985555e7c89fa6ec5a923d7acd4d1b45dde6a617f64d" Mar 21 09:08:05 crc kubenswrapper[4932]: I0321 09:08:05.024102 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568068-zdq5z" Mar 21 09:08:05 crc kubenswrapper[4932]: I0321 09:08:05.334415 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568062-ckbkj"] Mar 21 09:08:05 crc kubenswrapper[4932]: I0321 09:08:05.339090 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568062-ckbkj"] Mar 21 09:08:05 crc kubenswrapper[4932]: I0321 09:08:05.714099 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026bb1a2-7881-45a8-8845-53d8bbcb4166" path="/var/lib/kubelet/pods/026bb1a2-7881-45a8-8845-53d8bbcb4166/volumes" Mar 21 09:09:00 crc kubenswrapper[4932]: I0321 09:09:00.225883 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:09:00 crc kubenswrapper[4932]: I0321 09:09:00.226794 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:09:30 crc kubenswrapper[4932]: I0321 09:09:30.225474 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:09:30 crc kubenswrapper[4932]: I0321 09:09:30.226575 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" 
podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.156511 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568070-hk5lh"] Mar 21 09:10:00 crc kubenswrapper[4932]: E0321 09:10:00.160218 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e36e90-607d-4a4d-9736-ab89fe133c9a" containerName="oc" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.160458 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e36e90-607d-4a4d-9736-ab89fe133c9a" containerName="oc" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.160929 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e36e90-607d-4a4d-9736-ab89fe133c9a" containerName="oc" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.162215 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.166431 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.166547 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.166653 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.168399 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568070-hk5lh"] Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.225724 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.225812 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.225881 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.226907 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"edd87e1d2bb0a42fc79fe0fc8fafbf90d9697e65cfd8fc6baf73b7211d563b23"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.227017 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://edd87e1d2bb0a42fc79fe0fc8fafbf90d9697e65cfd8fc6baf73b7211d563b23" gracePeriod=600 Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.330947 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qttc9\" (UniqueName: \"kubernetes.io/projected/a2456e5f-1686-4196-b670-9e994a0d694f-kube-api-access-qttc9\") pod \"auto-csr-approver-29568070-hk5lh\" (UID: \"a2456e5f-1686-4196-b670-9e994a0d694f\") " pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.432625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttc9\" (UniqueName: \"kubernetes.io/projected/a2456e5f-1686-4196-b670-9e994a0d694f-kube-api-access-qttc9\") pod \"auto-csr-approver-29568070-hk5lh\" (UID: \"a2456e5f-1686-4196-b670-9e994a0d694f\") " pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.458668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttc9\" (UniqueName: \"kubernetes.io/projected/a2456e5f-1686-4196-b670-9e994a0d694f-kube-api-access-qttc9\") pod \"auto-csr-approver-29568070-hk5lh\" (UID: \"a2456e5f-1686-4196-b670-9e994a0d694f\") " pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.492016 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.721019 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568070-hk5lh"] Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.814923 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="edd87e1d2bb0a42fc79fe0fc8fafbf90d9697e65cfd8fc6baf73b7211d563b23" exitCode=0 Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.814990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"edd87e1d2bb0a42fc79fe0fc8fafbf90d9697e65cfd8fc6baf73b7211d563b23"} Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.815096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"8decec33670c2842c92261f8dd47259d538fd63496f5f2522fb72de9d5a14bf4"} Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.815117 4932 scope.go:117] "RemoveContainer" containerID="d6d1a99812a9df2e3a0aed95e8032e795b8e981cbae4b19e570cfe3a8c155d8a" Mar 21 09:10:00 crc kubenswrapper[4932]: I0321 09:10:00.816544 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" event={"ID":"a2456e5f-1686-4196-b670-9e994a0d694f","Type":"ContainerStarted","Data":"aeed692831d10948f190247722a3b55babb19a87372458157ee9d8a10738d879"} Mar 21 09:10:01 crc kubenswrapper[4932]: I0321 09:10:01.824842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" 
event={"ID":"a2456e5f-1686-4196-b670-9e994a0d694f","Type":"ContainerStarted","Data":"be9ecffb1050ac9442a7a7870506ce53c03a1c7bf67b12e486b1f8295e3c449e"} Mar 21 09:10:01 crc kubenswrapper[4932]: I0321 09:10:01.842944 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" podStartSLOduration=1.031115241 podStartE2EDuration="1.842919893s" podCreationTimestamp="2026-03-21 09:10:00 +0000 UTC" firstStartedPulling="2026-03-21 09:10:00.731867177 +0000 UTC m=+704.327065446" lastFinishedPulling="2026-03-21 09:10:01.543671829 +0000 UTC m=+705.138870098" observedRunningTime="2026-03-21 09:10:01.8409858 +0000 UTC m=+705.436184099" watchObservedRunningTime="2026-03-21 09:10:01.842919893 +0000 UTC m=+705.438118192" Mar 21 09:10:02 crc kubenswrapper[4932]: I0321 09:10:02.879489 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2456e5f-1686-4196-b670-9e994a0d694f" containerID="be9ecffb1050ac9442a7a7870506ce53c03a1c7bf67b12e486b1f8295e3c449e" exitCode=0 Mar 21 09:10:02 crc kubenswrapper[4932]: I0321 09:10:02.879620 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" event={"ID":"a2456e5f-1686-4196-b670-9e994a0d694f","Type":"ContainerDied","Data":"be9ecffb1050ac9442a7a7870506ce53c03a1c7bf67b12e486b1f8295e3c449e"} Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.128442 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.293296 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qttc9\" (UniqueName: \"kubernetes.io/projected/a2456e5f-1686-4196-b670-9e994a0d694f-kube-api-access-qttc9\") pod \"a2456e5f-1686-4196-b670-9e994a0d694f\" (UID: \"a2456e5f-1686-4196-b670-9e994a0d694f\") " Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.299584 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2456e5f-1686-4196-b670-9e994a0d694f-kube-api-access-qttc9" (OuterVolumeSpecName: "kube-api-access-qttc9") pod "a2456e5f-1686-4196-b670-9e994a0d694f" (UID: "a2456e5f-1686-4196-b670-9e994a0d694f"). InnerVolumeSpecName "kube-api-access-qttc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.394845 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qttc9\" (UniqueName: \"kubernetes.io/projected/a2456e5f-1686-4196-b670-9e994a0d694f-kube-api-access-qttc9\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.913424 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" event={"ID":"a2456e5f-1686-4196-b670-9e994a0d694f","Type":"ContainerDied","Data":"aeed692831d10948f190247722a3b55babb19a87372458157ee9d8a10738d879"} Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.913463 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeed692831d10948f190247722a3b55babb19a87372458157ee9d8a10738d879" Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.913517 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568070-hk5lh" Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.947239 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568064-58rl6"] Mar 21 09:10:04 crc kubenswrapper[4932]: I0321 09:10:04.951674 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568064-58rl6"] Mar 21 09:10:05 crc kubenswrapper[4932]: I0321 09:10:05.711812 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1438e2-78ca-424b-968d-3a749eac42ec" path="/var/lib/kubelet/pods/3a1438e2-78ca-424b-968d-3a749eac42ec/volumes" Mar 21 09:10:19 crc kubenswrapper[4932]: I0321 09:10:19.552531 4932 scope.go:117] "RemoveContainer" containerID="07f4ac30d27cc766ff13280bf7292f6927e9feb5c63be51a683059ef86f34807" Mar 21 09:10:19 crc kubenswrapper[4932]: I0321 09:10:19.606863 4932 scope.go:117] "RemoveContainer" containerID="c2410365de3b696ccaeb9d3701cf7e1ba78e42c30f98637364db715639fea248" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.595594 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r"] Mar 21 09:10:20 crc kubenswrapper[4932]: E0321 09:10:20.595854 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2456e5f-1686-4196-b670-9e994a0d694f" containerName="oc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.595867 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2456e5f-1686-4196-b670-9e994a0d694f" containerName="oc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.595958 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2456e5f-1686-4196-b670-9e994a0d694f" containerName="oc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.596402 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.600105 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.600403 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ctjl4" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.608193 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-j2plc"] Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.608926 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-j2plc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.610518 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fx7xq" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.618492 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-j2plc"] Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.623771 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r"] Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.629169 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.657111 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fdnwp"] Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.657899 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.661510 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9lhdc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.665307 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fdnwp"] Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.716087 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5869\" (UniqueName: \"kubernetes.io/projected/cab06b9d-4bb0-40e9-93b7-9448b2d47467-kube-api-access-n5869\") pod \"cert-manager-858654f9db-j2plc\" (UID: \"cab06b9d-4bb0-40e9-93b7-9448b2d47467\") " pod="cert-manager/cert-manager-858654f9db-j2plc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.717084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hxv\" (UniqueName: \"kubernetes.io/projected/53b6ef69-81be-4a78-9f72-c0464ac4b003-kube-api-access-h5hxv\") pod \"cert-manager-webhook-687f57d79b-fdnwp\" (UID: \"53b6ef69-81be-4a78-9f72-c0464ac4b003\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.717243 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6zw7\" (UniqueName: \"kubernetes.io/projected/73bfb425-466a-4886-ac74-3fa588f4eb32-kube-api-access-s6zw7\") pod \"cert-manager-cainjector-cf98fcc89-l5m8r\" (UID: \"73bfb425-466a-4886-ac74-3fa588f4eb32\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.819010 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5869\" (UniqueName: 
\"kubernetes.io/projected/cab06b9d-4bb0-40e9-93b7-9448b2d47467-kube-api-access-n5869\") pod \"cert-manager-858654f9db-j2plc\" (UID: \"cab06b9d-4bb0-40e9-93b7-9448b2d47467\") " pod="cert-manager/cert-manager-858654f9db-j2plc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.819066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hxv\" (UniqueName: \"kubernetes.io/projected/53b6ef69-81be-4a78-9f72-c0464ac4b003-kube-api-access-h5hxv\") pod \"cert-manager-webhook-687f57d79b-fdnwp\" (UID: \"53b6ef69-81be-4a78-9f72-c0464ac4b003\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.819110 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6zw7\" (UniqueName: \"kubernetes.io/projected/73bfb425-466a-4886-ac74-3fa588f4eb32-kube-api-access-s6zw7\") pod \"cert-manager-cainjector-cf98fcc89-l5m8r\" (UID: \"73bfb425-466a-4886-ac74-3fa588f4eb32\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.839690 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hxv\" (UniqueName: \"kubernetes.io/projected/53b6ef69-81be-4a78-9f72-c0464ac4b003-kube-api-access-h5hxv\") pod \"cert-manager-webhook-687f57d79b-fdnwp\" (UID: \"53b6ef69-81be-4a78-9f72-c0464ac4b003\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.840022 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5869\" (UniqueName: \"kubernetes.io/projected/cab06b9d-4bb0-40e9-93b7-9448b2d47467-kube-api-access-n5869\") pod \"cert-manager-858654f9db-j2plc\" (UID: \"cab06b9d-4bb0-40e9-93b7-9448b2d47467\") " pod="cert-manager/cert-manager-858654f9db-j2plc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.840831 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s6zw7\" (UniqueName: \"kubernetes.io/projected/73bfb425-466a-4886-ac74-3fa588f4eb32-kube-api-access-s6zw7\") pod \"cert-manager-cainjector-cf98fcc89-l5m8r\" (UID: \"73bfb425-466a-4886-ac74-3fa588f4eb32\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.920916 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.930216 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-j2plc" Mar 21 09:10:20 crc kubenswrapper[4932]: I0321 09:10:20.978924 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:21 crc kubenswrapper[4932]: I0321 09:10:21.164460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-j2plc"] Mar 21 09:10:21 crc kubenswrapper[4932]: I0321 09:10:21.429155 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r"] Mar 21 09:10:21 crc kubenswrapper[4932]: W0321 09:10:21.430175 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73bfb425_466a_4886_ac74_3fa588f4eb32.slice/crio-1cfd58d94a2716c983b494e75a9e6929802f833152410db0195e2c6030b7621e WatchSource:0}: Error finding container 1cfd58d94a2716c983b494e75a9e6929802f833152410db0195e2c6030b7621e: Status 404 returned error can't find the container with id 1cfd58d94a2716c983b494e75a9e6929802f833152410db0195e2c6030b7621e Mar 21 09:10:21 crc kubenswrapper[4932]: W0321 09:10:21.432489 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53b6ef69_81be_4a78_9f72_c0464ac4b003.slice/crio-33b43cc355f10cc62e61f9208e3ccb4a839d6f1178c525c866d836093d48e96c WatchSource:0}: Error finding container 33b43cc355f10cc62e61f9208e3ccb4a839d6f1178c525c866d836093d48e96c: Status 404 returned error can't find the container with id 33b43cc355f10cc62e61f9208e3ccb4a839d6f1178c525c866d836093d48e96c Mar 21 09:10:21 crc kubenswrapper[4932]: I0321 09:10:21.437288 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fdnwp"] Mar 21 09:10:21 crc kubenswrapper[4932]: I0321 09:10:21.497337 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" event={"ID":"53b6ef69-81be-4a78-9f72-c0464ac4b003","Type":"ContainerStarted","Data":"33b43cc355f10cc62e61f9208e3ccb4a839d6f1178c525c866d836093d48e96c"} Mar 21 09:10:21 crc kubenswrapper[4932]: I0321 09:10:21.498476 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" event={"ID":"73bfb425-466a-4886-ac74-3fa588f4eb32","Type":"ContainerStarted","Data":"1cfd58d94a2716c983b494e75a9e6929802f833152410db0195e2c6030b7621e"} Mar 21 09:10:21 crc kubenswrapper[4932]: I0321 09:10:21.501162 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-j2plc" event={"ID":"cab06b9d-4bb0-40e9-93b7-9448b2d47467","Type":"ContainerStarted","Data":"1279a52013397a36e194d1dfff24add6925c35014c3551ece3949b26dcd0b8c7"} Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.527469 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" event={"ID":"73bfb425-466a-4886-ac74-3fa588f4eb32","Type":"ContainerStarted","Data":"04a395d86141a75e99494be2952bb94db8cdb51a9efcb664199e0914685ceb66"} Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.529314 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-858654f9db-j2plc" event={"ID":"cab06b9d-4bb0-40e9-93b7-9448b2d47467","Type":"ContainerStarted","Data":"03c996daaac59d4b879e3eee6042dc904cd6219cb7ca4ad9a1d58f1abdb48d88"} Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.531083 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" event={"ID":"53b6ef69-81be-4a78-9f72-c0464ac4b003","Type":"ContainerStarted","Data":"900cf2d2858f4de6659645a804961f41706d289fadb2d21fa68f9cfc6575be94"} Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.531220 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.547899 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l5m8r" podStartSLOduration=1.991050484 podStartE2EDuration="5.547876432s" podCreationTimestamp="2026-03-21 09:10:20 +0000 UTC" firstStartedPulling="2026-03-21 09:10:21.432335659 +0000 UTC m=+725.027533948" lastFinishedPulling="2026-03-21 09:10:24.989161627 +0000 UTC m=+728.584359896" observedRunningTime="2026-03-21 09:10:25.544303476 +0000 UTC m=+729.139501745" watchObservedRunningTime="2026-03-21 09:10:25.547876432 +0000 UTC m=+729.143074701" Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.563959 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" podStartSLOduration=2.013144853 podStartE2EDuration="5.563934604s" podCreationTimestamp="2026-03-21 09:10:20 +0000 UTC" firstStartedPulling="2026-03-21 09:10:21.43453214 +0000 UTC m=+725.029730429" lastFinishedPulling="2026-03-21 09:10:24.985321911 +0000 UTC m=+728.580520180" observedRunningTime="2026-03-21 09:10:25.561872677 +0000 UTC m=+729.157070956" watchObservedRunningTime="2026-03-21 09:10:25.563934604 +0000 UTC 
m=+729.159132903" Mar 21 09:10:25 crc kubenswrapper[4932]: I0321 09:10:25.587986 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-j2plc" podStartSLOduration=1.777711909 podStartE2EDuration="5.587955986s" podCreationTimestamp="2026-03-21 09:10:20 +0000 UTC" firstStartedPulling="2026-03-21 09:10:21.17615128 +0000 UTC m=+724.771349559" lastFinishedPulling="2026-03-21 09:10:24.986395367 +0000 UTC m=+728.581593636" observedRunningTime="2026-03-21 09:10:25.58746562 +0000 UTC m=+729.182663899" watchObservedRunningTime="2026-03-21 09:10:25.587955986 +0000 UTC m=+729.183154285" Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.882403 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2zqsw"] Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883517 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-controller" containerID="cri-o://be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" gracePeriod=30 Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883629 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="nbdb" containerID="cri-o://fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" gracePeriod=30 Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883667 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" gracePeriod=30 Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883784 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="sbdb" containerID="cri-o://67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" gracePeriod=30
Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883712 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-node" containerID="cri-o://23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" gracePeriod=30
Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883659 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-acl-logging" containerID="cri-o://237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" gracePeriod=30
Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.883871 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="northd" containerID="cri-o://609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" gracePeriod=30
Mar 21 09:10:29 crc kubenswrapper[4932]: I0321 09:10:29.924397 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller" containerID="cri-o://1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" gracePeriod=30
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.199493 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b is running failed: container process not found" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.199691 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e is running failed: container process not found" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.199790 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 is running failed: container process not found" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200298 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 is running failed: container process not found" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200496 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b is running failed: container process not found" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200500 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e is running failed: container process not found" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200642 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 is running failed: container process not found" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200682 4932 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200909 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b is running failed: container process not found" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.200987 4932 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="nbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.201013 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e is running failed: container process not found" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.201081 4932 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="sbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.213943 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/3.log"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.216465 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovn-acl-logging/0.log"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.216999 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovn-controller/0.log"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.217757 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284584 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-99l8w"]
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284856 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-acl-logging"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284870 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-acl-logging"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284881 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284887 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284902 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284908 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284917 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="nbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284922 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="nbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284930 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-node"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284936 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-node"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284946 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284954 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284962 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="northd"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284967 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="northd"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284976 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284982 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.284990 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.284996 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.285006 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="sbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285013 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="sbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.285021 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-ovn-metrics"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285027 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-ovn-metrics"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.285036 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kubecfg-setup"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285043 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kubecfg-setup"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285130 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="northd"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285138 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-ovn-metrics"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285148 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="kube-rbac-proxy-node"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285156 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285162 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="nbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285170 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="sbdb"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285179 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285185 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285194 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285202 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovn-acl-logging"
Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.285310 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285318 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285466 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.285477 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerName="ovnkube-controller"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.287551 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348111 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dtpn\" (UniqueName: \"kubernetes.io/projected/96df7c54-2644-44b4-bcd7-13b82db2ea5d-kube-api-access-8dtpn\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348170 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-ovn\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348192 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-var-lib-openvswitch\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348257 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348259 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348363 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-slash\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348400 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-slash" (OuterVolumeSpecName: "host-slash") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-config\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348678 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovn-node-metrics-cert\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348697 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-systemd\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348721 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-netd\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348742 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-env-overrides\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348758 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-netns\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348772 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-log-socket\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348786 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-node-log\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348803 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-script-lib\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348819 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348833 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-openvswitch\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-systemd-units\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348871 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-bin\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348888 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-ovn-kubernetes\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-kubelet\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348920 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-etc-openvswitch\") pod \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\" (UID: \"96df7c54-2644-44b4-bcd7-13b82db2ea5d\") "
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.348982 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovnkube-script-lib\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349001 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-kubelet\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-slash\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349039 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349056 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349072 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-env-overrides\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349088 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-node-log\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-run-ovn-kubernetes\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349116 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-cni-netd\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349149 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdh8\" (UniqueName: \"kubernetes.io/projected/93e91877-bcc4-49e4-aedc-d7af75d05e9c-kube-api-access-fhdh8\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349167 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-cni-bin\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349185 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-run-netns\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovnkube-config\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349225 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-systemd\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349243 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-etc-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-systemd-units\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-var-lib-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349297 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovn-node-metrics-cert\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349311 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-ovn\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349327 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-log-socket\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w"
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349382 4932 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349393 4932 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349402 4932 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349443 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349458 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349468 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349504 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349539 4932 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-slash\") on node \"crc\" DevicePath \"\""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349616 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349633 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349650 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-log-socket" (OuterVolumeSpecName: "log-socket") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349671 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-node-log" (OuterVolumeSpecName: "node-log") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.349906 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.350137 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "ovnkube-script-lib".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.350288 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.353554 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.353603 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96df7c54-2644-44b4-bcd7-13b82db2ea5d-kube-api-access-8dtpn" (OuterVolumeSpecName: "kube-api-access-8dtpn") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "kube-api-access-8dtpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.361655 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "96df7c54-2644-44b4-bcd7-13b82db2ea5d" (UID: "96df7c54-2644-44b4-bcd7-13b82db2ea5d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450182 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-log-socket\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovnkube-script-lib\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450244 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-slash\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450259 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-kubelet\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450281 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc 
kubenswrapper[4932]: I0321 09:10:30.450298 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-env-overrides\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450315 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-node-log\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450359 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-run-ovn-kubernetes\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-log-socket\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 
09:10:30.450388 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-kubelet\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450405 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-slash\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450416 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-cni-netd\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450395 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-cni-netd\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-node-log\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450442 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdh8\" (UniqueName: 
\"kubernetes.io/projected/93e91877-bcc4-49e4-aedc-d7af75d05e9c-kube-api-access-fhdh8\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-cni-bin\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450505 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450560 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-run-ovn-kubernetes\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450536 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-cni-bin\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450499 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-run-netns\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450619 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovnkube-config\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-systemd\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450679 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-etc-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450518 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-host-run-netns\") pod 
\"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450706 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-systemd\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450710 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-systemd-units\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450742 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-etc-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450767 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-systemd-units\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450819 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-var-lib-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovn-node-metrics-cert\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450863 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-ovn\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450913 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450928 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450942 4932 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450954 4932 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450965 4932 reconciler_common.go:293] "Volume 
detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450977 4932 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450988 4932 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-log-socket\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.450998 4932 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-node-log\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451008 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96df7c54-2644-44b4-bcd7-13b82db2ea5d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451014 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-env-overrides\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451021 4932 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451006 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-run-ovn\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451012 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93e91877-bcc4-49e4-aedc-d7af75d05e9c-var-lib-openvswitch\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451049 4932 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451159 4932 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451173 4932 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451191 4932 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451206 4932 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96df7c54-2644-44b4-bcd7-13b82db2ea5d-etc-openvswitch\") on 
node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451224 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dtpn\" (UniqueName: \"kubernetes.io/projected/96df7c54-2644-44b4-bcd7-13b82db2ea5d-kube-api-access-8dtpn\") on node \"crc\" DevicePath \"\"" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451465 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovnkube-config\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.451546 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovnkube-script-lib\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.453995 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93e91877-bcc4-49e4-aedc-d7af75d05e9c-ovn-node-metrics-cert\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.466967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdh8\" (UniqueName: \"kubernetes.io/projected/93e91877-bcc4-49e4-aedc-d7af75d05e9c-kube-api-access-fhdh8\") pod \"ovnkube-node-99l8w\" (UID: \"93e91877-bcc4-49e4-aedc-d7af75d05e9c\") " pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.563549 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/2.log" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.564024 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/1.log" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.564079 4932 generic.go:334] "Generic (PLEG): container finished" podID="a038ce15-d375-452d-b38f-6893df65dee4" containerID="e15136e2e2b85a0626ee0e6b82b18a1a2feb09d9addb024999f8b5a7d32e367e" exitCode=2 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.564148 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerDied","Data":"e15136e2e2b85a0626ee0e6b82b18a1a2feb09d9addb024999f8b5a7d32e367e"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.564198 4932 scope.go:117] "RemoveContainer" containerID="8570ad8e297e9a866bc96383961f7815c756bff8c8a926dc9b2c0203ab0bcb6a" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.564842 4932 scope.go:117] "RemoveContainer" containerID="e15136e2e2b85a0626ee0e6b82b18a1a2feb09d9addb024999f8b5a7d32e367e" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.565163 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jmd8j_openshift-multus(a038ce15-d375-452d-b38f-6893df65dee4)\"" pod="openshift-multus/multus-jmd8j" podUID="a038ce15-d375-452d-b38f-6893df65dee4" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.568405 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovnkube-controller/3.log" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.572059 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovn-acl-logging/0.log" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.572650 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2zqsw_96df7c54-2644-44b4-bcd7-13b82db2ea5d/ovn-controller/0.log" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575744 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" exitCode=0 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575778 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" exitCode=0 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575792 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" exitCode=0 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575810 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" exitCode=0 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575823 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" exitCode=0 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575836 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" exitCode=0 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575847 4932 generic.go:334] "Generic (PLEG): 
container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" exitCode=143 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575859 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575860 4932 generic.go:334] "Generic (PLEG): container finished" podID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" exitCode=143 Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.575789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576072 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576116 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576136 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576155 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576186 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576197 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576208 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576219 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576231 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576241 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576251 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576263 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576272 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576286 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576302 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576314 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576323 4932 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576333 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576415 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576427 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576437 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576447 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576458 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576467 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576496 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576514 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576530 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576540 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576550 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576560 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576569 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576579 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} Mar 21 
09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576588 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576598 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576607 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576622 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2zqsw" event={"ID":"96df7c54-2644-44b4-bcd7-13b82db2ea5d","Type":"ContainerDied","Data":"847b43ed64f507d07f3bd50641cf592661d92a27233adf43da7444c1388a4b9d"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576638 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576650 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576660 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576670 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576679 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576689 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576699 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576708 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576717 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.576727 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.603325 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.604295 4932 scope.go:117] "RemoveContainer" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.625600 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2zqsw"] Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.630706 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2zqsw"] Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.643300 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.670525 4932 scope.go:117] "RemoveContainer" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.688528 4932 scope.go:117] "RemoveContainer" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.703766 4932 scope.go:117] "RemoveContainer" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.739068 4932 scope.go:117] "RemoveContainer" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.757324 4932 scope.go:117] "RemoveContainer" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.774152 4932 scope.go:117] "RemoveContainer" containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.819382 4932 scope.go:117] "RemoveContainer" 
containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.833169 4932 scope.go:117] "RemoveContainer" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.849319 4932 scope.go:117] "RemoveContainer" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.849764 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": container with ID starting with 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 not found: ID does not exist" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.849794 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} err="failed to get container status \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": rpc error: code = NotFound desc = could not find container \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": container with ID starting with 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.849820 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.850075 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": container with ID starting with 
ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4 not found: ID does not exist" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.850101 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} err="failed to get container status \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": rpc error: code = NotFound desc = could not find container \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": container with ID starting with ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.850121 4932 scope.go:117] "RemoveContainer" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.850405 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": container with ID starting with 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e not found: ID does not exist" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.850443 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} err="failed to get container status \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": rpc error: code = NotFound desc = could not find container \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": container with ID starting with 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e not found: ID does not 
exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.850462 4932 scope.go:117] "RemoveContainer" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.850752 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": container with ID starting with fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b not found: ID does not exist" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.850779 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} err="failed to get container status \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": rpc error: code = NotFound desc = could not find container \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": container with ID starting with fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.850798 4932 scope.go:117] "RemoveContainer" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.851051 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": container with ID starting with 609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549 not found: ID does not exist" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.851072 4932 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} err="failed to get container status \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": rpc error: code = NotFound desc = could not find container \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": container with ID starting with 609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.851087 4932 scope.go:117] "RemoveContainer" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.852007 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": container with ID starting with dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736 not found: ID does not exist" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.852064 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} err="failed to get container status \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": rpc error: code = NotFound desc = could not find container \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": container with ID starting with dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.852103 4932 scope.go:117] "RemoveContainer" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.852399 4932 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": container with ID starting with 23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14 not found: ID does not exist" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.852428 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} err="failed to get container status \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": rpc error: code = NotFound desc = could not find container \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": container with ID starting with 23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.852442 4932 scope.go:117] "RemoveContainer" containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.852728 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": container with ID starting with 237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254 not found: ID does not exist" containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.852800 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} err="failed to get container status \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": rpc error: code = NotFound desc = could 
not find container \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": container with ID starting with 237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.852823 4932 scope.go:117] "RemoveContainer" containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.853106 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": container with ID starting with be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03 not found: ID does not exist" containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.853132 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} err="failed to get container status \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": rpc error: code = NotFound desc = could not find container \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": container with ID starting with be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.853149 4932 scope.go:117] "RemoveContainer" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" Mar 21 09:10:30 crc kubenswrapper[4932]: E0321 09:10:30.853461 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": container with ID starting with 29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c not found: 
ID does not exist" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.853499 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} err="failed to get container status \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": rpc error: code = NotFound desc = could not find container \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": container with ID starting with 29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.853530 4932 scope.go:117] "RemoveContainer" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.853771 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} err="failed to get container status \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": rpc error: code = NotFound desc = could not find container \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": container with ID starting with 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.853793 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854133 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} err="failed to get container status \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": rpc error: code = 
NotFound desc = could not find container \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": container with ID starting with ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854154 4932 scope.go:117] "RemoveContainer" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854430 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} err="failed to get container status \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": rpc error: code = NotFound desc = could not find container \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": container with ID starting with 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854447 4932 scope.go:117] "RemoveContainer" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854665 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} err="failed to get container status \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": rpc error: code = NotFound desc = could not find container \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": container with ID starting with fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854684 4932 scope.go:117] "RemoveContainer" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" Mar 21 09:10:30 crc 
kubenswrapper[4932]: I0321 09:10:30.854929 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} err="failed to get container status \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": rpc error: code = NotFound desc = could not find container \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": container with ID starting with 609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.854961 4932 scope.go:117] "RemoveContainer" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.855192 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} err="failed to get container status \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": rpc error: code = NotFound desc = could not find container \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": container with ID starting with dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.855216 4932 scope.go:117] "RemoveContainer" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.855501 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} err="failed to get container status \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": rpc error: code = NotFound desc = could not find container \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": container 
with ID starting with 23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.855522 4932 scope.go:117] "RemoveContainer" containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.855780 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} err="failed to get container status \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": rpc error: code = NotFound desc = could not find container \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": container with ID starting with 237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.855804 4932 scope.go:117] "RemoveContainer" containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.856300 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} err="failed to get container status \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": rpc error: code = NotFound desc = could not find container \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": container with ID starting with be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.856320 4932 scope.go:117] "RemoveContainer" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.856657 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} err="failed to get container status \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": rpc error: code = NotFound desc = could not find container \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": container with ID starting with 29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.856681 4932 scope.go:117] "RemoveContainer" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.857078 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} err="failed to get container status \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": rpc error: code = NotFound desc = could not find container \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": container with ID starting with 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.857110 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.857513 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} err="failed to get container status \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": rpc error: code = NotFound desc = could not find container \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": container with ID starting with ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4 not found: ID does not 
exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.857534 4932 scope.go:117] "RemoveContainer" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.857842 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} err="failed to get container status \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": rpc error: code = NotFound desc = could not find container \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": container with ID starting with 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.857871 4932 scope.go:117] "RemoveContainer" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.858199 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} err="failed to get container status \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": rpc error: code = NotFound desc = could not find container \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": container with ID starting with fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.858221 4932 scope.go:117] "RemoveContainer" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.858583 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} err="failed to get container status 
\"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": rpc error: code = NotFound desc = could not find container \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": container with ID starting with 609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.858605 4932 scope.go:117] "RemoveContainer" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.858857 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} err="failed to get container status \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": rpc error: code = NotFound desc = could not find container \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": container with ID starting with dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.858877 4932 scope.go:117] "RemoveContainer" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.859097 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} err="failed to get container status \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": rpc error: code = NotFound desc = could not find container \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": container with ID starting with 23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.859114 4932 scope.go:117] "RemoveContainer" 
containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.859455 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} err="failed to get container status \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": rpc error: code = NotFound desc = could not find container \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": container with ID starting with 237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.859473 4932 scope.go:117] "RemoveContainer" containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.859705 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} err="failed to get container status \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": rpc error: code = NotFound desc = could not find container \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": container with ID starting with be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.859721 4932 scope.go:117] "RemoveContainer" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860088 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} err="failed to get container status \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": rpc error: code = NotFound desc = could 
not find container \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": container with ID starting with 29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860114 4932 scope.go:117] "RemoveContainer" containerID="1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860343 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784"} err="failed to get container status \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": rpc error: code = NotFound desc = could not find container \"1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784\": container with ID starting with 1bfd295290a5cad62e43af01d50b3d8c08441b8ef446e5cdf909b1ebebee1784 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860387 4932 scope.go:117] "RemoveContainer" containerID="ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860649 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4"} err="failed to get container status \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": rpc error: code = NotFound desc = could not find container \"ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4\": container with ID starting with ee3eb8a6c1f6c080ddc3eeea2847af0b333c933325965a5048eb3b856672c1d4 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860686 4932 scope.go:117] "RemoveContainer" containerID="67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 
09:10:30.860939 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e"} err="failed to get container status \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": rpc error: code = NotFound desc = could not find container \"67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e\": container with ID starting with 67a47446d09199033e6b3a69e0082c8ae1c6c870f7b5960a7acf3fb79501e05e not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.860957 4932 scope.go:117] "RemoveContainer" containerID="fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.861185 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b"} err="failed to get container status \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": rpc error: code = NotFound desc = could not find container \"fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b\": container with ID starting with fb7ab43d621a98f0209dd3227f9041c2bf26f7c67e38204d600245319e15113b not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.861207 4932 scope.go:117] "RemoveContainer" containerID="609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.861437 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549"} err="failed to get container status \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": rpc error: code = NotFound desc = could not find container \"609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549\": container with ID starting with 
609410ec6d83d1145dc138975ea708a9e4e6ff94b214d74f71991aca4d3ed549 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.861455 4932 scope.go:117] "RemoveContainer" containerID="dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.863470 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736"} err="failed to get container status \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": rpc error: code = NotFound desc = could not find container \"dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736\": container with ID starting with dd7428a3f659f8eabbb7dac4dca2039d0311187538a99dc3cb59dac7ac580736 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.863495 4932 scope.go:117] "RemoveContainer" containerID="23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.863827 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14"} err="failed to get container status \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": rpc error: code = NotFound desc = could not find container \"23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14\": container with ID starting with 23b547a256f5172e1e51fc405fae62b0c4b4e313b0a7494dbe588682882a9c14 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.863856 4932 scope.go:117] "RemoveContainer" containerID="237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.864113 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254"} err="failed to get container status \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": rpc error: code = NotFound desc = could not find container \"237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254\": container with ID starting with 237000c434e8a4876cdb73e5861e95aeb2ecd488d3b1e6cc889e5e5ec2224254 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.864153 4932 scope.go:117] "RemoveContainer" containerID="be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.864397 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03"} err="failed to get container status \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": rpc error: code = NotFound desc = could not find container \"be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03\": container with ID starting with be9e9b5d7f8449e30943e6cb7e9ff3325e1cb628cdf52a51c27e2f0d98a2ff03 not found: ID does not exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.864420 4932 scope.go:117] "RemoveContainer" containerID="29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.864723 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c"} err="failed to get container status \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": rpc error: code = NotFound desc = could not find container \"29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c\": container with ID starting with 29d108f626039f85e7fd0b10d89684d72c24a94e95af62843d7efa331705738c not found: ID does not 
exist" Mar 21 09:10:30 crc kubenswrapper[4932]: I0321 09:10:30.981774 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" Mar 21 09:10:31 crc kubenswrapper[4932]: I0321 09:10:31.586289 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/2.log" Mar 21 09:10:31 crc kubenswrapper[4932]: I0321 09:10:31.589550 4932 generic.go:334] "Generic (PLEG): container finished" podID="93e91877-bcc4-49e4-aedc-d7af75d05e9c" containerID="09a63bd9980f5d02b48337948fa4ac0d5cc8f89181acb902c189eea9707f07ad" exitCode=0 Mar 21 09:10:31 crc kubenswrapper[4932]: I0321 09:10:31.589656 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerDied","Data":"09a63bd9980f5d02b48337948fa4ac0d5cc8f89181acb902c189eea9707f07ad"} Mar 21 09:10:31 crc kubenswrapper[4932]: I0321 09:10:31.589769 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"5b235e8be4738385410146b7ddf6ff9b3e882d7492872f3b28076e68b1bf28c4"} Mar 21 09:10:31 crc kubenswrapper[4932]: I0321 09:10:31.710200 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96df7c54-2644-44b4-bcd7-13b82db2ea5d" path="/var/lib/kubelet/pods/96df7c54-2644-44b4-bcd7-13b82db2ea5d/volumes" Mar 21 09:10:32 crc kubenswrapper[4932]: I0321 09:10:32.602791 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"f8693a6811d1be342afa507935fd1997f997823c62acac4d7ec7d68a5864b5a9"} Mar 21 09:10:32 crc kubenswrapper[4932]: I0321 09:10:32.602850 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"410244a4af82d881e1b15402a03d1b1806ca60fdee134876a7278b386658728a"} Mar 21 09:10:32 crc kubenswrapper[4932]: I0321 09:10:32.602871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"bfa4102d9afcae7b2a33908717d0d87efb7af4a478d06700e24f5f8f0b50baf9"} Mar 21 09:10:32 crc kubenswrapper[4932]: I0321 09:10:32.602907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"f9527b5eda41203d9da290e47829baff5064476cdd69b5973122f06c4bf565a8"} Mar 21 09:10:32 crc kubenswrapper[4932]: I0321 09:10:32.602925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"09d390e6de5334f7dd163e9e8d26c7232f4a6a85d750fae01fecd412e773bd64"} Mar 21 09:10:32 crc kubenswrapper[4932]: I0321 09:10:32.602943 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"ec7b2b6fd7eafcab913e97fc24a435a8768308c0fc53a107e27b3fe2bd3d4f7c"} Mar 21 09:10:35 crc kubenswrapper[4932]: I0321 09:10:35.632137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"449face9fd6b7ec25011b04731313fb0b5abb40959d57717ff179d180325b85f"} Mar 21 09:10:37 crc kubenswrapper[4932]: I0321 09:10:37.654258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" 
event={"ID":"93e91877-bcc4-49e4-aedc-d7af75d05e9c","Type":"ContainerStarted","Data":"bdaade2afcbb61595197633380e744aed5dc774078ce759b43f027dbf4705391"} Mar 21 09:10:37 crc kubenswrapper[4932]: I0321 09:10:37.656529 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:37 crc kubenswrapper[4932]: I0321 09:10:37.656553 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:37 crc kubenswrapper[4932]: I0321 09:10:37.694610 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" podStartSLOduration=7.694576939 podStartE2EDuration="7.694576939s" podCreationTimestamp="2026-03-21 09:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:10:37.688337957 +0000 UTC m=+741.283536246" watchObservedRunningTime="2026-03-21 09:10:37.694576939 +0000 UTC m=+741.289775238" Mar 21 09:10:37 crc kubenswrapper[4932]: I0321 09:10:37.710383 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:38 crc kubenswrapper[4932]: I0321 09:10:38.661038 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:38 crc kubenswrapper[4932]: I0321 09:10:38.688423 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:10:42 crc kubenswrapper[4932]: I0321 09:10:42.703445 4932 scope.go:117] "RemoveContainer" containerID="e15136e2e2b85a0626ee0e6b82b18a1a2feb09d9addb024999f8b5a7d32e367e" Mar 21 09:10:42 crc kubenswrapper[4932]: E0321 09:10:42.704807 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jmd8j_openshift-multus(a038ce15-d375-452d-b38f-6893df65dee4)\"" pod="openshift-multus/multus-jmd8j" podUID="a038ce15-d375-452d-b38f-6893df65dee4" Mar 21 09:10:56 crc kubenswrapper[4932]: I0321 09:10:56.900844 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc"] Mar 21 09:10:56 crc kubenswrapper[4932]: I0321 09:10:56.902568 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:56 crc kubenswrapper[4932]: I0321 09:10:56.904322 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 21 09:10:56 crc kubenswrapper[4932]: I0321 09:10:56.912434 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc"] Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.014163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzln2\" (UniqueName: \"kubernetes.io/projected/02cf18c7-ac8d-4afb-9594-ea0675338c9a-kube-api-access-dzln2\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.014222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.014260 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.115962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzln2\" (UniqueName: \"kubernetes.io/projected/02cf18c7-ac8d-4afb-9594-ea0675338c9a-kube-api-access-dzln2\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.116048 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.116106 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: 
I0321 09:10:57.116676 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.116776 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.135659 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzln2\" (UniqueName: \"kubernetes.io/projected/02cf18c7-ac8d-4afb-9594-ea0675338c9a-kube-api-access-dzln2\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.223522 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.250014 4932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(22c9c4b95ccb6caae265f31c4ba92984576e9ecfed300ea623f8a8306db5e7d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.250592 4932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(22c9c4b95ccb6caae265f31c4ba92984576e9ecfed300ea623f8a8306db5e7d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.250769 4932 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(22c9c4b95ccb6caae265f31c4ba92984576e9ecfed300ea623f8a8306db5e7d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.250953 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace(02cf18c7-ac8d-4afb-9594-ea0675338c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace(02cf18c7-ac8d-4afb-9594-ea0675338c9a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(22c9c4b95ccb6caae265f31c4ba92984576e9ecfed300ea623f8a8306db5e7d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.710156 4932 scope.go:117] "RemoveContainer" containerID="e15136e2e2b85a0626ee0e6b82b18a1a2feb09d9addb024999f8b5a7d32e367e" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.794685 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: I0321 09:10:57.795484 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.825499 4932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(2129ccbb11ba3749d08da021f261c3a44883a590ecaba99f0437d36f485ad7d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.825577 4932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(2129ccbb11ba3749d08da021f261c3a44883a590ecaba99f0437d36f485ad7d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.825606 4932 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(2129ccbb11ba3749d08da021f261c3a44883a590ecaba99f0437d36f485ad7d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:10:57 crc kubenswrapper[4932]: E0321 09:10:57.825693 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace(02cf18c7-ac8d-4afb-9594-ea0675338c9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace(02cf18c7-ac8d-4afb-9594-ea0675338c9a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_openshift-marketplace_02cf18c7-ac8d-4afb-9594-ea0675338c9a_0(2129ccbb11ba3749d08da021f261c3a44883a590ecaba99f0437d36f485ad7d8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" Mar 21 09:10:58 crc kubenswrapper[4932]: I0321 09:10:58.802700 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jmd8j_a038ce15-d375-452d-b38f-6893df65dee4/kube-multus/2.log" Mar 21 09:10:58 crc kubenswrapper[4932]: I0321 09:10:58.802755 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jmd8j" event={"ID":"a038ce15-d375-452d-b38f-6893df65dee4","Type":"ContainerStarted","Data":"3ba53bef6c1df96790185e4016da75b143ffcf6c10c3ce276c722f30d83a988f"} Mar 21 09:11:00 crc kubenswrapper[4932]: I0321 09:11:00.631434 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-99l8w" Mar 21 09:11:11 crc kubenswrapper[4932]: I0321 09:11:11.702548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:11:11 crc kubenswrapper[4932]: I0321 09:11:11.703556 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:11:11 crc kubenswrapper[4932]: I0321 09:11:11.974752 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc"] Mar 21 09:11:12 crc kubenswrapper[4932]: I0321 09:11:12.909435 4932 generic.go:334] "Generic (PLEG): container finished" podID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerID="190372aa3d7188c320b1d9b7bc1d4da15511b60918553c082eef6982ce6e7444" exitCode=0 Mar 21 09:11:12 crc kubenswrapper[4932]: I0321 09:11:12.909539 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" event={"ID":"02cf18c7-ac8d-4afb-9594-ea0675338c9a","Type":"ContainerDied","Data":"190372aa3d7188c320b1d9b7bc1d4da15511b60918553c082eef6982ce6e7444"} Mar 21 09:11:12 crc kubenswrapper[4932]: I0321 09:11:12.909792 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" event={"ID":"02cf18c7-ac8d-4afb-9594-ea0675338c9a","Type":"ContainerStarted","Data":"a6710d4bf63165092796a30df9e2642fa43d3eaf9295cb775f6c070351846dc0"} Mar 21 09:11:14 crc kubenswrapper[4932]: I0321 09:11:14.945721 4932 generic.go:334] "Generic (PLEG): container finished" podID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerID="3097a432023e440d1250eb593a79cc1ac318e1c3c4ea353b804b8d73788d5d9b" exitCode=0 Mar 21 09:11:14 crc kubenswrapper[4932]: I0321 09:11:14.945885 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" event={"ID":"02cf18c7-ac8d-4afb-9594-ea0675338c9a","Type":"ContainerDied","Data":"3097a432023e440d1250eb593a79cc1ac318e1c3c4ea353b804b8d73788d5d9b"} Mar 21 09:11:15 crc kubenswrapper[4932]: I0321 09:11:15.956795 4932 generic.go:334] "Generic (PLEG): container finished" podID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerID="f542b5e2035015043fbf6594acebcc248affa92a5638fb818542f4c5ad63db41" exitCode=0 Mar 21 09:11:15 crc kubenswrapper[4932]: I0321 09:11:15.956839 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" event={"ID":"02cf18c7-ac8d-4afb-9594-ea0675338c9a","Type":"ContainerDied","Data":"f542b5e2035015043fbf6594acebcc248affa92a5638fb818542f4c5ad63db41"} Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.181748 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.327094 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzln2\" (UniqueName: \"kubernetes.io/projected/02cf18c7-ac8d-4afb-9594-ea0675338c9a-kube-api-access-dzln2\") pod \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.327162 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-util\") pod \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.327245 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-bundle\") pod \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\" (UID: \"02cf18c7-ac8d-4afb-9594-ea0675338c9a\") " Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.330053 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-bundle" (OuterVolumeSpecName: "bundle") pod "02cf18c7-ac8d-4afb-9594-ea0675338c9a" (UID: "02cf18c7-ac8d-4afb-9594-ea0675338c9a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.332842 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02cf18c7-ac8d-4afb-9594-ea0675338c9a-kube-api-access-dzln2" (OuterVolumeSpecName: "kube-api-access-dzln2") pod "02cf18c7-ac8d-4afb-9594-ea0675338c9a" (UID: "02cf18c7-ac8d-4afb-9594-ea0675338c9a"). InnerVolumeSpecName "kube-api-access-dzln2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.340487 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-util" (OuterVolumeSpecName: "util") pod "02cf18c7-ac8d-4afb-9594-ea0675338c9a" (UID: "02cf18c7-ac8d-4afb-9594-ea0675338c9a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.429131 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzln2\" (UniqueName: \"kubernetes.io/projected/02cf18c7-ac8d-4afb-9594-ea0675338c9a-kube-api-access-dzln2\") on node \"crc\" DevicePath \"\"" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.429175 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-util\") on node \"crc\" DevicePath \"\"" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.429186 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02cf18c7-ac8d-4afb-9594-ea0675338c9a-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.970696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" event={"ID":"02cf18c7-ac8d-4afb-9594-ea0675338c9a","Type":"ContainerDied","Data":"a6710d4bf63165092796a30df9e2642fa43d3eaf9295cb775f6c070351846dc0"} Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.971025 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6710d4bf63165092796a30df9e2642fa43d3eaf9295cb775f6c070351846dc0" Mar 21 09:11:17 crc kubenswrapper[4932]: I0321 09:11:17.970750 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc" Mar 21 09:11:19 crc kubenswrapper[4932]: I0321 09:11:19.671787 4932 scope.go:117] "RemoveContainer" containerID="593d6b66dde78bbbe1337a6374efe4a9cade7527802ccbaf27ce2d7f4051bc2a" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.858910 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-wchrv"] Mar 21 09:11:30 crc kubenswrapper[4932]: E0321 09:11:30.859803 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="util" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.859819 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="util" Mar 21 09:11:30 crc kubenswrapper[4932]: E0321 09:11:30.859831 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="pull" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.859838 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="pull" Mar 21 09:11:30 crc kubenswrapper[4932]: E0321 09:11:30.859853 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="extract" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.859861 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="extract" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.859987 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02cf18c7-ac8d-4afb-9594-ea0675338c9a" containerName="extract" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.860479 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.862824 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-xk2tm" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.863940 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.864697 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.875725 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-wchrv"] Mar 21 09:11:30 crc kubenswrapper[4932]: I0321 09:11:30.903841 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zmc\" (UniqueName: \"kubernetes.io/projected/50126757-824d-4f6b-8427-1cf4299adc5c-kube-api-access-47zmc\") pod \"obo-prometheus-operator-8ff7d675-wchrv\" (UID: \"50126757-824d-4f6b-8427-1cf4299adc5c\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.004808 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47zmc\" (UniqueName: \"kubernetes.io/projected/50126757-824d-4f6b-8427-1cf4299adc5c-kube-api-access-47zmc\") pod \"obo-prometheus-operator-8ff7d675-wchrv\" (UID: \"50126757-824d-4f6b-8427-1cf4299adc5c\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.040759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47zmc\" (UniqueName: \"kubernetes.io/projected/50126757-824d-4f6b-8427-1cf4299adc5c-kube-api-access-47zmc\") pod 
\"obo-prometheus-operator-8ff7d675-wchrv\" (UID: \"50126757-824d-4f6b-8427-1cf4299adc5c\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.166787 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.167734 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.170376 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-m96ps" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.170476 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.179980 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.181946 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.182844 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.201242 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.239480 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.312953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e3f6352-4f16-48d4-b2d4-aaf334ba7521-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-dcxqd\" (UID: \"8e3f6352-4f16-48d4-b2d4-aaf334ba7521\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.313394 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3553f81f-1424-4c1b-8c98-b05f0e2103c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-rlk58\" (UID: \"3553f81f-1424-4c1b-8c98-b05f0e2103c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.313427 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3553f81f-1424-4c1b-8c98-b05f0e2103c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-rlk58\" (UID: \"3553f81f-1424-4c1b-8c98-b05f0e2103c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.313447 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e3f6352-4f16-48d4-b2d4-aaf334ba7521-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-dcxqd\" (UID: \"8e3f6352-4f16-48d4-b2d4-aaf334ba7521\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.413750 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e3f6352-4f16-48d4-b2d4-aaf334ba7521-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-dcxqd\" (UID: \"8e3f6352-4f16-48d4-b2d4-aaf334ba7521\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.413828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3553f81f-1424-4c1b-8c98-b05f0e2103c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-rlk58\" (UID: \"3553f81f-1424-4c1b-8c98-b05f0e2103c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.413860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3553f81f-1424-4c1b-8c98-b05f0e2103c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-rlk58\" (UID: \"3553f81f-1424-4c1b-8c98-b05f0e2103c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.413880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/8e3f6352-4f16-48d4-b2d4-aaf334ba7521-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-dcxqd\" (UID: \"8e3f6352-4f16-48d4-b2d4-aaf334ba7521\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.422938 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3553f81f-1424-4c1b-8c98-b05f0e2103c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-rlk58\" (UID: \"3553f81f-1424-4c1b-8c98-b05f0e2103c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.424795 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e3f6352-4f16-48d4-b2d4-aaf334ba7521-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-dcxqd\" (UID: \"8e3f6352-4f16-48d4-b2d4-aaf334ba7521\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.435796 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3553f81f-1424-4c1b-8c98-b05f0e2103c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-rlk58\" (UID: \"3553f81f-1424-4c1b-8c98-b05f0e2103c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.448942 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-wchrv"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.457587 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/8e3f6352-4f16-48d4-b2d4-aaf334ba7521-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-974755686-dcxqd\" (UID: \"8e3f6352-4f16-48d4-b2d4-aaf334ba7521\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.484826 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.533779 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.746582 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-dfktt"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.747448 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.751151 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-s4vxz" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.757987 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.762417 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-dfktt"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.810546 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd"] Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.847438 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58"] Mar 21 09:11:31 crc kubenswrapper[4932]: W0321 09:11:31.855492 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3553f81f_1424_4c1b_8c98_b05f0e2103c4.slice/crio-2b1fd9a083d99ac570b9b1514689ca34cbadfdea91420900ff29f6b31bb1d228 WatchSource:0}: Error finding container 2b1fd9a083d99ac570b9b1514689ca34cbadfdea91420900ff29f6b31bb1d228: Status 404 returned error can't find the container with id 2b1fd9a083d99ac570b9b1514689ca34cbadfdea91420900ff29f6b31bb1d228 Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.922189 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcsq2\" (UniqueName: \"kubernetes.io/projected/e09365ab-596d-4dcf-b3d1-5bba08ab9f43-kube-api-access-hcsq2\") pod \"observability-operator-6dd7dd855f-dfktt\" (UID: \"e09365ab-596d-4dcf-b3d1-5bba08ab9f43\") " 
pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:31 crc kubenswrapper[4932]: I0321 09:11:31.922341 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e09365ab-596d-4dcf-b3d1-5bba08ab9f43-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-dfktt\" (UID: \"e09365ab-596d-4dcf-b3d1-5bba08ab9f43\") " pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.022916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcsq2\" (UniqueName: \"kubernetes.io/projected/e09365ab-596d-4dcf-b3d1-5bba08ab9f43-kube-api-access-hcsq2\") pod \"observability-operator-6dd7dd855f-dfktt\" (UID: \"e09365ab-596d-4dcf-b3d1-5bba08ab9f43\") " pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.022975 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e09365ab-596d-4dcf-b3d1-5bba08ab9f43-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-dfktt\" (UID: \"e09365ab-596d-4dcf-b3d1-5bba08ab9f43\") " pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.028203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e09365ab-596d-4dcf-b3d1-5bba08ab9f43-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-dfktt\" (UID: \"e09365ab-596d-4dcf-b3d1-5bba08ab9f43\") " pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.039360 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcsq2\" 
(UniqueName: \"kubernetes.io/projected/e09365ab-596d-4dcf-b3d1-5bba08ab9f43-kube-api-access-hcsq2\") pod \"observability-operator-6dd7dd855f-dfktt\" (UID: \"e09365ab-596d-4dcf-b3d1-5bba08ab9f43\") " pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.052575 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" event={"ID":"3553f81f-1424-4c1b-8c98-b05f0e2103c4","Type":"ContainerStarted","Data":"2b1fd9a083d99ac570b9b1514689ca34cbadfdea91420900ff29f6b31bb1d228"} Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.053851 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" event={"ID":"50126757-824d-4f6b-8427-1cf4299adc5c","Type":"ContainerStarted","Data":"d759e8131a555b43a3367408cc4a38b2551b67a9ebca93bd07279b0459afd45a"} Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.055092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" event={"ID":"8e3f6352-4f16-48d4-b2d4-aaf334ba7521","Type":"ContainerStarted","Data":"6449ce5d97c581298672e431df9db30a8b84641bf72f51c3da87b487b9709270"} Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.081601 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.131854 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-7447db6c6c-r2ttb"] Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.132760 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.136033 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.136113 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-f4tdb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.152831 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-7447db6c6c-r2ttb"] Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.330382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxs48\" (UniqueName: \"kubernetes.io/projected/a654ea14-9551-467e-bdb5-024465a33224-kube-api-access-sxs48\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.330748 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a654ea14-9551-467e-bdb5-024465a33224-apiservice-cert\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.330781 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a654ea14-9551-467e-bdb5-024465a33224-openshift-service-ca\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.330826 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a654ea14-9551-467e-bdb5-024465a33224-webhook-cert\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.444816 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-dfktt"] Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.447924 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a654ea14-9551-467e-bdb5-024465a33224-webhook-cert\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.448016 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxs48\" (UniqueName: \"kubernetes.io/projected/a654ea14-9551-467e-bdb5-024465a33224-kube-api-access-sxs48\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.448049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a654ea14-9551-467e-bdb5-024465a33224-apiservice-cert\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.448100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a654ea14-9551-467e-bdb5-024465a33224-openshift-service-ca\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.449037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a654ea14-9551-467e-bdb5-024465a33224-openshift-service-ca\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.454468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a654ea14-9551-467e-bdb5-024465a33224-apiservice-cert\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.455285 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a654ea14-9551-467e-bdb5-024465a33224-webhook-cert\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.480104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxs48\" (UniqueName: \"kubernetes.io/projected/a654ea14-9551-467e-bdb5-024465a33224-kube-api-access-sxs48\") pod \"perses-operator-7447db6c6c-r2ttb\" (UID: \"a654ea14-9551-467e-bdb5-024465a33224\") " pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:32 crc kubenswrapper[4932]: I0321 09:11:32.749179 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:33 crc kubenswrapper[4932]: I0321 09:11:33.086899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" event={"ID":"e09365ab-596d-4dcf-b3d1-5bba08ab9f43","Type":"ContainerStarted","Data":"debc65ea0b993c4a457d8b7e791a611f15afe9805cd1dfffe7152bd958e7b876"} Mar 21 09:11:33 crc kubenswrapper[4932]: I0321 09:11:33.216163 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-7447db6c6c-r2ttb"] Mar 21 09:11:34 crc kubenswrapper[4932]: I0321 09:11:34.104621 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" event={"ID":"a654ea14-9551-467e-bdb5-024465a33224","Type":"ContainerStarted","Data":"401b57a3d6ded9c7d46a984864fef477daf7516d92c8f856c58ac239a6937ac9"} Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.197301 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" event={"ID":"e09365ab-596d-4dcf-b3d1-5bba08ab9f43","Type":"ContainerStarted","Data":"f84083ba2715fb2271c438638d45a6a9fb418dca952e535fa1b9ec421ca0cfe0"} Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.198277 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.199699 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" event={"ID":"8e3f6352-4f16-48d4-b2d4-aaf334ba7521","Type":"ContainerStarted","Data":"dee21b3a13eb1266088461bbc0df39f65b77189ca870e78b799ca725b6bdb10f"} Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.201588 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" event={"ID":"3553f81f-1424-4c1b-8c98-b05f0e2103c4","Type":"ContainerStarted","Data":"cd5c2fc69464fb10c93c8b294d12c4a4e85acd7e5b670edc545a246a5de77294"} Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.203479 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" event={"ID":"50126757-824d-4f6b-8427-1cf4299adc5c","Type":"ContainerStarted","Data":"99c9fa37332b54caa3c62c2855195a267c5e631a95b6eebf107c30acf0cf5bda"} Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.205102 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" event={"ID":"a654ea14-9551-467e-bdb5-024465a33224","Type":"ContainerStarted","Data":"2642815e94828702760479704ce1c65a078056563d79fe77fffb00ac2489ef34"} Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.205393 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.227287 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" podStartSLOduration=2.205831884 podStartE2EDuration="14.227271947s" podCreationTimestamp="2026-03-21 09:11:31 +0000 UTC" firstStartedPulling="2026-03-21 09:11:32.492608356 +0000 UTC m=+796.087806625" lastFinishedPulling="2026-03-21 09:11:44.514048419 +0000 UTC m=+808.109246688" observedRunningTime="2026-03-21 09:11:45.224789967 +0000 UTC m=+808.819988226" watchObservedRunningTime="2026-03-21 09:11:45.227271947 +0000 UTC m=+808.822470216" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.229936 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-dfktt" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 
09:11:45.336921 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" podStartSLOduration=2.102621742 podStartE2EDuration="13.336903043s" podCreationTimestamp="2026-03-21 09:11:32 +0000 UTC" firstStartedPulling="2026-03-21 09:11:33.244473769 +0000 UTC m=+796.839672038" lastFinishedPulling="2026-03-21 09:11:44.47875507 +0000 UTC m=+808.073953339" observedRunningTime="2026-03-21 09:11:45.334617359 +0000 UTC m=+808.929815648" watchObservedRunningTime="2026-03-21 09:11:45.336903043 +0000 UTC m=+808.932101312" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.338668 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-dcxqd" podStartSLOduration=1.667445388 podStartE2EDuration="14.338653068s" podCreationTimestamp="2026-03-21 09:11:31 +0000 UTC" firstStartedPulling="2026-03-21 09:11:31.820779222 +0000 UTC m=+795.415977491" lastFinishedPulling="2026-03-21 09:11:44.491986902 +0000 UTC m=+808.087185171" observedRunningTime="2026-03-21 09:11:45.26492629 +0000 UTC m=+808.860124559" watchObservedRunningTime="2026-03-21 09:11:45.338653068 +0000 UTC m=+808.933851337" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.371475 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-974755686-rlk58" podStartSLOduration=1.715503642 podStartE2EDuration="14.371451667s" podCreationTimestamp="2026-03-21 09:11:31 +0000 UTC" firstStartedPulling="2026-03-21 09:11:31.858218528 +0000 UTC m=+795.453416797" lastFinishedPulling="2026-03-21 09:11:44.514166553 +0000 UTC m=+808.109364822" observedRunningTime="2026-03-21 09:11:45.367861573 +0000 UTC m=+808.963059842" watchObservedRunningTime="2026-03-21 09:11:45.371451667 +0000 UTC m=+808.966649936" Mar 21 09:11:45 crc kubenswrapper[4932]: I0321 09:11:45.398483 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wchrv" podStartSLOduration=2.3934305350000002 podStartE2EDuration="15.398458331s" podCreationTimestamp="2026-03-21 09:11:30 +0000 UTC" firstStartedPulling="2026-03-21 09:11:31.476057488 +0000 UTC m=+795.071255757" lastFinishedPulling="2026-03-21 09:11:44.481085284 +0000 UTC m=+808.076283553" observedRunningTime="2026-03-21 09:11:45.387574063 +0000 UTC m=+808.982772332" watchObservedRunningTime="2026-03-21 09:11:45.398458331 +0000 UTC m=+808.993656600" Mar 21 09:11:52 crc kubenswrapper[4932]: I0321 09:11:52.754030 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-7447db6c6c-r2ttb" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.131091 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568072-4bx56"] Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.132564 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.135303 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.135800 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.137845 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.181169 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568072-4bx56"] Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.218087 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssjk\" (UniqueName: \"kubernetes.io/projected/4099c5d2-c6c7-40c3-b462-b67e970eb8ed-kube-api-access-fssjk\") pod \"auto-csr-approver-29568072-4bx56\" (UID: \"4099c5d2-c6c7-40c3-b462-b67e970eb8ed\") " pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.225751 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.225952 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:12:00 crc 
kubenswrapper[4932]: I0321 09:12:00.319990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fssjk\" (UniqueName: \"kubernetes.io/projected/4099c5d2-c6c7-40c3-b462-b67e970eb8ed-kube-api-access-fssjk\") pod \"auto-csr-approver-29568072-4bx56\" (UID: \"4099c5d2-c6c7-40c3-b462-b67e970eb8ed\") " pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.349371 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fssjk\" (UniqueName: \"kubernetes.io/projected/4099c5d2-c6c7-40c3-b462-b67e970eb8ed-kube-api-access-fssjk\") pod \"auto-csr-approver-29568072-4bx56\" (UID: \"4099c5d2-c6c7-40c3-b462-b67e970eb8ed\") " pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.451699 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:00 crc kubenswrapper[4932]: I0321 09:12:00.660428 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568072-4bx56"] Mar 21 09:12:00 crc kubenswrapper[4932]: W0321 09:12:00.667533 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4099c5d2_c6c7_40c3_b462_b67e970eb8ed.slice/crio-f1122251a0d6aad055e82ca058d62c7a23764fdc8ca5d5cea3165de0b7d434ff WatchSource:0}: Error finding container f1122251a0d6aad055e82ca058d62c7a23764fdc8ca5d5cea3165de0b7d434ff: Status 404 returned error can't find the container with id f1122251a0d6aad055e82ca058d62c7a23764fdc8ca5d5cea3165de0b7d434ff Mar 21 09:12:01 crc kubenswrapper[4932]: I0321 09:12:01.291968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568072-4bx56" 
event={"ID":"4099c5d2-c6c7-40c3-b462-b67e970eb8ed","Type":"ContainerStarted","Data":"f1122251a0d6aad055e82ca058d62c7a23764fdc8ca5d5cea3165de0b7d434ff"} Mar 21 09:12:02 crc kubenswrapper[4932]: I0321 09:12:02.308467 4932 generic.go:334] "Generic (PLEG): container finished" podID="4099c5d2-c6c7-40c3-b462-b67e970eb8ed" containerID="115fbe92689e03821ad165ba1b6b73717ae7980cfc9e1510e94f0b67193313bc" exitCode=0 Mar 21 09:12:02 crc kubenswrapper[4932]: I0321 09:12:02.308523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568072-4bx56" event={"ID":"4099c5d2-c6c7-40c3-b462-b67e970eb8ed","Type":"ContainerDied","Data":"115fbe92689e03821ad165ba1b6b73717ae7980cfc9e1510e94f0b67193313bc"} Mar 21 09:12:03 crc kubenswrapper[4932]: I0321 09:12:03.599825 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:03 crc kubenswrapper[4932]: I0321 09:12:03.666612 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fssjk\" (UniqueName: \"kubernetes.io/projected/4099c5d2-c6c7-40c3-b462-b67e970eb8ed-kube-api-access-fssjk\") pod \"4099c5d2-c6c7-40c3-b462-b67e970eb8ed\" (UID: \"4099c5d2-c6c7-40c3-b462-b67e970eb8ed\") " Mar 21 09:12:03 crc kubenswrapper[4932]: I0321 09:12:03.673070 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4099c5d2-c6c7-40c3-b462-b67e970eb8ed-kube-api-access-fssjk" (OuterVolumeSpecName: "kube-api-access-fssjk") pod "4099c5d2-c6c7-40c3-b462-b67e970eb8ed" (UID: "4099c5d2-c6c7-40c3-b462-b67e970eb8ed"). InnerVolumeSpecName "kube-api-access-fssjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:12:03 crc kubenswrapper[4932]: I0321 09:12:03.768623 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fssjk\" (UniqueName: \"kubernetes.io/projected/4099c5d2-c6c7-40c3-b462-b67e970eb8ed-kube-api-access-fssjk\") on node \"crc\" DevicePath \"\"" Mar 21 09:12:04 crc kubenswrapper[4932]: I0321 09:12:04.323075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568072-4bx56" event={"ID":"4099c5d2-c6c7-40c3-b462-b67e970eb8ed","Type":"ContainerDied","Data":"f1122251a0d6aad055e82ca058d62c7a23764fdc8ca5d5cea3165de0b7d434ff"} Mar 21 09:12:04 crc kubenswrapper[4932]: I0321 09:12:04.323116 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1122251a0d6aad055e82ca058d62c7a23764fdc8ca5d5cea3165de0b7d434ff" Mar 21 09:12:04 crc kubenswrapper[4932]: I0321 09:12:04.323136 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568072-4bx56" Mar 21 09:12:04 crc kubenswrapper[4932]: I0321 09:12:04.658480 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568066-dq5zd"] Mar 21 09:12:04 crc kubenswrapper[4932]: I0321 09:12:04.661945 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568066-dq5zd"] Mar 21 09:12:05 crc kubenswrapper[4932]: I0321 09:12:05.709612 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa66e26-f3bb-4771-8bdd-c349fadbac4e" path="/var/lib/kubelet/pods/6fa66e26-f3bb-4771-8bdd-c349fadbac4e/volumes" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.212400 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj"] Mar 21 09:12:09 crc kubenswrapper[4932]: E0321 09:12:09.213025 4932 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4099c5d2-c6c7-40c3-b462-b67e970eb8ed" containerName="oc" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.213036 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4099c5d2-c6c7-40c3-b462-b67e970eb8ed" containerName="oc" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.213124 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4099c5d2-c6c7-40c3-b462-b67e970eb8ed" containerName="oc" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.213873 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.215824 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.229896 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj"] Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.336711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.336767 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.336787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8fk2\" (UniqueName: \"kubernetes.io/projected/f7fbd2ff-b5dc-4a28-8651-841f850a8099-kube-api-access-m8fk2\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.438085 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.438148 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.438177 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8fk2\" (UniqueName: \"kubernetes.io/projected/f7fbd2ff-b5dc-4a28-8651-841f850a8099-kube-api-access-m8fk2\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 
09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.438683 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.438785 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.458228 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8fk2\" (UniqueName: \"kubernetes.io/projected/f7fbd2ff-b5dc-4a28-8651-841f850a8099-kube-api-access-m8fk2\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.529387 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:09 crc kubenswrapper[4932]: I0321 09:12:09.777891 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj"] Mar 21 09:12:10 crc kubenswrapper[4932]: I0321 09:12:10.362731 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerID="518008fb5c37282b23860c0b17b6c2e8b203499ed151002b804b0f99223f1fd8" exitCode=0 Mar 21 09:12:10 crc kubenswrapper[4932]: I0321 09:12:10.362844 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" event={"ID":"f7fbd2ff-b5dc-4a28-8651-841f850a8099","Type":"ContainerDied","Data":"518008fb5c37282b23860c0b17b6c2e8b203499ed151002b804b0f99223f1fd8"} Mar 21 09:12:10 crc kubenswrapper[4932]: I0321 09:12:10.363150 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" event={"ID":"f7fbd2ff-b5dc-4a28-8651-841f850a8099","Type":"ContainerStarted","Data":"f780c670765a86c1854969306628a1f9d8b7cca2943ce0cc7e48b5d16dc14f86"} Mar 21 09:12:12 crc kubenswrapper[4932]: I0321 09:12:12.382494 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerID="6a2af14eb64f2b8c3b542d46eb99703198d05dc699d82b025c55efd448b7d8a6" exitCode=0 Mar 21 09:12:12 crc kubenswrapper[4932]: I0321 09:12:12.382836 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" event={"ID":"f7fbd2ff-b5dc-4a28-8651-841f850a8099","Type":"ContainerDied","Data":"6a2af14eb64f2b8c3b542d46eb99703198d05dc699d82b025c55efd448b7d8a6"} Mar 21 09:12:15 crc kubenswrapper[4932]: I0321 09:12:15.407793 4932 
generic.go:334] "Generic (PLEG): container finished" podID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerID="ce1a077c45a89407e3d9f367f5b65061120e2e1b91c7382fc957013a72a85b9e" exitCode=0 Mar 21 09:12:15 crc kubenswrapper[4932]: I0321 09:12:15.407835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" event={"ID":"f7fbd2ff-b5dc-4a28-8651-841f850a8099","Type":"ContainerDied","Data":"ce1a077c45a89407e3d9f367f5b65061120e2e1b91c7382fc957013a72a85b9e"} Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.704618 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.831650 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8fk2\" (UniqueName: \"kubernetes.io/projected/f7fbd2ff-b5dc-4a28-8651-841f850a8099-kube-api-access-m8fk2\") pod \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.831723 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-util\") pod \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.831820 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-bundle\") pod \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\" (UID: \"f7fbd2ff-b5dc-4a28-8651-841f850a8099\") " Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.832905 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-bundle" (OuterVolumeSpecName: "bundle") pod "f7fbd2ff-b5dc-4a28-8651-841f850a8099" (UID: "f7fbd2ff-b5dc-4a28-8651-841f850a8099"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.838287 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fbd2ff-b5dc-4a28-8651-841f850a8099-kube-api-access-m8fk2" (OuterVolumeSpecName: "kube-api-access-m8fk2") pod "f7fbd2ff-b5dc-4a28-8651-841f850a8099" (UID: "f7fbd2ff-b5dc-4a28-8651-841f850a8099"). InnerVolumeSpecName "kube-api-access-m8fk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.842142 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-util" (OuterVolumeSpecName: "util") pod "f7fbd2ff-b5dc-4a28-8651-841f850a8099" (UID: "f7fbd2ff-b5dc-4a28-8651-841f850a8099"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.933864 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8fk2\" (UniqueName: \"kubernetes.io/projected/f7fbd2ff-b5dc-4a28-8651-841f850a8099-kube-api-access-m8fk2\") on node \"crc\" DevicePath \"\"" Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.933902 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-util\") on node \"crc\" DevicePath \"\"" Mar 21 09:12:16 crc kubenswrapper[4932]: I0321 09:12:16.933915 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7fbd2ff-b5dc-4a28-8651-841f850a8099-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:12:17 crc kubenswrapper[4932]: I0321 09:12:17.431482 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" event={"ID":"f7fbd2ff-b5dc-4a28-8651-841f850a8099","Type":"ContainerDied","Data":"f780c670765a86c1854969306628a1f9d8b7cca2943ce0cc7e48b5d16dc14f86"} Mar 21 09:12:17 crc kubenswrapper[4932]: I0321 09:12:17.431534 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f780c670765a86c1854969306628a1f9d8b7cca2943ce0cc7e48b5d16dc14f86" Mar 21 09:12:17 crc kubenswrapper[4932]: I0321 09:12:17.431562 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj" Mar 21 09:12:19 crc kubenswrapper[4932]: I0321 09:12:19.743115 4932 scope.go:117] "RemoveContainer" containerID="fe0e9c371b6444506c05678de87fe5477a113918227e28a193e5c1962853c1c4" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.803048 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf"] Mar 21 09:12:20 crc kubenswrapper[4932]: E0321 09:12:20.803572 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="util" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.803590 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="util" Mar 21 09:12:20 crc kubenswrapper[4932]: E0321 09:12:20.803601 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="pull" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.803608 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="pull" Mar 21 09:12:20 crc kubenswrapper[4932]: E0321 09:12:20.803635 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="extract" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.803642 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="extract" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.803736 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fbd2ff-b5dc-4a28-8651-841f850a8099" containerName="extract" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.804132 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.806251 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.806867 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-vqsqt" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.807254 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.825421 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf"] Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.886363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bkk\" (UniqueName: \"kubernetes.io/projected/6cbaeae6-d897-4485-9378-5370ce57234e-kube-api-access-d5bkk\") pod \"nmstate-operator-796d4cfff4-pw2wf\" (UID: \"6cbaeae6-d897-4485-9378-5370ce57234e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" Mar 21 09:12:20 crc kubenswrapper[4932]: I0321 09:12:20.988311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bkk\" (UniqueName: \"kubernetes.io/projected/6cbaeae6-d897-4485-9378-5370ce57234e-kube-api-access-d5bkk\") pod \"nmstate-operator-796d4cfff4-pw2wf\" (UID: \"6cbaeae6-d897-4485-9378-5370ce57234e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" Mar 21 09:12:21 crc kubenswrapper[4932]: I0321 09:12:21.010363 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5bkk\" (UniqueName: \"kubernetes.io/projected/6cbaeae6-d897-4485-9378-5370ce57234e-kube-api-access-d5bkk\") pod \"nmstate-operator-796d4cfff4-pw2wf\" (UID: 
\"6cbaeae6-d897-4485-9378-5370ce57234e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" Mar 21 09:12:21 crc kubenswrapper[4932]: I0321 09:12:21.137476 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" Mar 21 09:12:21 crc kubenswrapper[4932]: I0321 09:12:21.336626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf"] Mar 21 09:12:21 crc kubenswrapper[4932]: I0321 09:12:21.454529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" event={"ID":"6cbaeae6-d897-4485-9378-5370ce57234e","Type":"ContainerStarted","Data":"e7f57349a10f8bb6034cdfb077e814de1ecc132f151d354d46452c0b7f3aa74b"} Mar 21 09:12:24 crc kubenswrapper[4932]: I0321 09:12:24.474423 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" event={"ID":"6cbaeae6-d897-4485-9378-5370ce57234e","Type":"ContainerStarted","Data":"9194a2678babf3d41f37b7707a33335ce9ad901f7b190440193ba25a5da73480"} Mar 21 09:12:24 crc kubenswrapper[4932]: I0321 09:12:24.496590 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-pw2wf" podStartSLOduration=2.385560567 podStartE2EDuration="4.496567982s" podCreationTimestamp="2026-03-21 09:12:20 +0000 UTC" firstStartedPulling="2026-03-21 09:12:21.35557305 +0000 UTC m=+844.950771319" lastFinishedPulling="2026-03-21 09:12:23.466580465 +0000 UTC m=+847.061778734" observedRunningTime="2026-03-21 09:12:24.490923362 +0000 UTC m=+848.086121651" watchObservedRunningTime="2026-03-21 09:12:24.496567982 +0000 UTC m=+848.091766261" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.204168 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 
09:12:30.206825 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.209136 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-w8m2f" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.216216 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.225039 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.225082 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.246472 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.247244 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.253002 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.265387 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mtqtd"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.266601 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.276733 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330474 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-ovs-socket\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330548 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9m86\" (UniqueName: \"kubernetes.io/projected/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-kube-api-access-j9m86\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: \"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330600 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfg4\" (UniqueName: \"kubernetes.io/projected/701f4786-9ccc-4178-a2f7-b88ec63c7a81-kube-api-access-tjfg4\") pod \"nmstate-metrics-9b8c8685d-2j2f2\" (UID: \"701f4786-9ccc-4178-a2f7-b88ec63c7a81\") " 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-dbus-socket\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330698 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-nmstate-lock\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330748 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7d5\" (UniqueName: \"kubernetes.io/projected/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-kube-api-access-tz7d5\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.330785 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: \"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.360648 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.361574 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.363222 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.363286 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.363411 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lrltz" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.372901 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9cb520c-96ae-4782-bd71-060a6de3c212-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432524 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e9cb520c-96ae-4782-bd71-060a6de3c212-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-ovs-socket\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " 
pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432645 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9m86\" (UniqueName: \"kubernetes.io/projected/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-kube-api-access-j9m86\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: \"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432684 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjfg4\" (UniqueName: \"kubernetes.io/projected/701f4786-9ccc-4178-a2f7-b88ec63c7a81-kube-api-access-tjfg4\") pod \"nmstate-metrics-9b8c8685d-2j2f2\" (UID: \"701f4786-9ccc-4178-a2f7-b88ec63c7a81\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-dbus-socket\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432716 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-ovs-socket\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432729 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-nmstate-lock\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " 
pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-nmstate-lock\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432848 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7d5\" (UniqueName: \"kubernetes.io/projected/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-kube-api-access-tz7d5\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: \"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.432945 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfw7j\" (UniqueName: \"kubernetes.io/projected/e9cb520c-96ae-4782-bd71-060a6de3c212-kube-api-access-kfw7j\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.433012 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-dbus-socket\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " 
pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: E0321 09:12:30.433136 4932 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 21 09:12:30 crc kubenswrapper[4932]: E0321 09:12:30.433210 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-tls-key-pair podName:d335f065-a5e0-46ca-b99c-61b2b2dbb3ea nodeName:}" failed. No retries permitted until 2026-03-21 09:12:30.933186548 +0000 UTC m=+854.528384817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-tls-key-pair") pod "nmstate-webhook-5f558f5558-vj7tk" (UID: "d335f065-a5e0-46ca-b99c-61b2b2dbb3ea") : secret "openshift-nmstate-webhook" not found Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.452924 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7d5\" (UniqueName: \"kubernetes.io/projected/0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775-kube-api-access-tz7d5\") pod \"nmstate-handler-mtqtd\" (UID: \"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775\") " pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.453010 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjfg4\" (UniqueName: \"kubernetes.io/projected/701f4786-9ccc-4178-a2f7-b88ec63c7a81-kube-api-access-tjfg4\") pod \"nmstate-metrics-9b8c8685d-2j2f2\" (UID: \"701f4786-9ccc-4178-a2f7-b88ec63c7a81\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.454211 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9m86\" (UniqueName: \"kubernetes.io/projected/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-kube-api-access-j9m86\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: 
\"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.527164 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.534518 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9cb520c-96ae-4782-bd71-060a6de3c212-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.534697 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e9cb520c-96ae-4782-bd71-060a6de3c212-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.534886 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfw7j\" (UniqueName: \"kubernetes.io/projected/e9cb520c-96ae-4782-bd71-060a6de3c212-kube-api-access-kfw7j\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.535789 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e9cb520c-96ae-4782-bd71-060a6de3c212-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc 
kubenswrapper[4932]: I0321 09:12:30.537951 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9cb520c-96ae-4782-bd71-060a6de3c212-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.558460 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55654448d6-vhn5c"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.559630 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.583929 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.584036 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55654448d6-vhn5c"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.588443 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfw7j\" (UniqueName: \"kubernetes.io/projected/e9cb520c-96ae-4782-bd71-060a6de3c212-kube-api-access-kfw7j\") pod \"nmstate-console-plugin-86f58fcf4-6npgh\" (UID: \"e9cb520c-96ae-4782-bd71-060a6de3c212\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.637069 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87d1cd17-9cdb-438c-98c1-c12043109f10-console-serving-cert\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 
09:12:30.637138 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-trusted-ca-bundle\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.637208 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-oauth-serving-cert\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.637262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87d1cd17-9cdb-438c-98c1-c12043109f10-console-oauth-config\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.637298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-console-config\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.637325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-service-ca\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " 
pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.637394 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwn4\" (UniqueName: \"kubernetes.io/projected/87d1cd17-9cdb-438c-98c1-c12043109f10-kube-api-access-6hwn4\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.685281 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-oauth-serving-cert\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738493 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87d1cd17-9cdb-438c-98c1-c12043109f10-console-oauth-config\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738713 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-console-config\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-service-ca\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738749 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hwn4\" (UniqueName: \"kubernetes.io/projected/87d1cd17-9cdb-438c-98c1-c12043109f10-kube-api-access-6hwn4\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738783 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87d1cd17-9cdb-438c-98c1-c12043109f10-console-serving-cert\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.738802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-trusted-ca-bundle\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.739550 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-oauth-serving-cert\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.740803 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-console-config\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.741310 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-trusted-ca-bundle\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.742380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/87d1cd17-9cdb-438c-98c1-c12043109f10-service-ca\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.747104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/87d1cd17-9cdb-438c-98c1-c12043109f10-console-serving-cert\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.748178 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/87d1cd17-9cdb-438c-98c1-c12043109f10-console-oauth-config\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.755614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hwn4\" (UniqueName: 
\"kubernetes.io/projected/87d1cd17-9cdb-438c-98c1-c12043109f10-kube-api-access-6hwn4\") pod \"console-55654448d6-vhn5c\" (UID: \"87d1cd17-9cdb-438c-98c1-c12043109f10\") " pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.799069 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2"] Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.918689 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh"] Mar 21 09:12:30 crc kubenswrapper[4932]: W0321 09:12:30.929059 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9cb520c_96ae_4782_bd71_060a6de3c212.slice/crio-1e95213c79938c7a73b22f95bbf591d14eeaa014f34ca4651d58a2d272a6913c WatchSource:0}: Error finding container 1e95213c79938c7a73b22f95bbf591d14eeaa014f34ca4651d58a2d272a6913c: Status 404 returned error can't find the container with id 1e95213c79938c7a73b22f95bbf591d14eeaa014f34ca4651d58a2d272a6913c Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.935707 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.941387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: \"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:30 crc kubenswrapper[4932]: I0321 09:12:30.944559 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d335f065-a5e0-46ca-b99c-61b2b2dbb3ea-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-vj7tk\" (UID: \"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.124542 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55654448d6-vhn5c"] Mar 21 09:12:31 crc kubenswrapper[4932]: W0321 09:12:31.136500 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d1cd17_9cdb_438c_98c1_c12043109f10.slice/crio-4a9ee3cceeb2cfd5abd1d4cdd5ff084bb63b114bf5e3c114ada1cf4c5a1b2a3c WatchSource:0}: Error finding container 4a9ee3cceeb2cfd5abd1d4cdd5ff084bb63b114bf5e3c114ada1cf4c5a1b2a3c: Status 404 returned error can't find the container with id 4a9ee3cceeb2cfd5abd1d4cdd5ff084bb63b114bf5e3c114ada1cf4c5a1b2a3c Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.164838 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.348209 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk"] Mar 21 09:12:31 crc kubenswrapper[4932]: W0321 09:12:31.357993 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd335f065_a5e0_46ca_b99c_61b2b2dbb3ea.slice/crio-b8a853429e6de24f5dac9803421e96c0a41d20db1a2405ce757d033d734d8991 WatchSource:0}: Error finding container b8a853429e6de24f5dac9803421e96c0a41d20db1a2405ce757d033d734d8991: Status 404 returned error can't find the container with id b8a853429e6de24f5dac9803421e96c0a41d20db1a2405ce757d033d734d8991 Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.535633 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" event={"ID":"e9cb520c-96ae-4782-bd71-060a6de3c212","Type":"ContainerStarted","Data":"1e95213c79938c7a73b22f95bbf591d14eeaa014f34ca4651d58a2d272a6913c"} Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.536671 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mtqtd" event={"ID":"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775","Type":"ContainerStarted","Data":"d45155e8ce986de34a67b7b18c89a12f651985e97b58d09583836213c0db0721"} Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.538308 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55654448d6-vhn5c" event={"ID":"87d1cd17-9cdb-438c-98c1-c12043109f10","Type":"ContainerStarted","Data":"62066a06515bc53375bbff1f8bfb42879b51d8e2c785d989e351a0ee64b1863f"} Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.538378 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55654448d6-vhn5c" 
event={"ID":"87d1cd17-9cdb-438c-98c1-c12043109f10","Type":"ContainerStarted","Data":"4a9ee3cceeb2cfd5abd1d4cdd5ff084bb63b114bf5e3c114ada1cf4c5a1b2a3c"} Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.539260 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" event={"ID":"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea","Type":"ContainerStarted","Data":"b8a853429e6de24f5dac9803421e96c0a41d20db1a2405ce757d033d734d8991"} Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.540125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" event={"ID":"701f4786-9ccc-4178-a2f7-b88ec63c7a81","Type":"ContainerStarted","Data":"a39431b0d0bba755fc866c33e5bbdb96953af4e461f26292bb1c1849e1add1fc"} Mar 21 09:12:31 crc kubenswrapper[4932]: I0321 09:12:31.561207 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55654448d6-vhn5c" podStartSLOduration=1.561180097 podStartE2EDuration="1.561180097s" podCreationTimestamp="2026-03-21 09:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:12:31.554409873 +0000 UTC m=+855.149608162" watchObservedRunningTime="2026-03-21 09:12:31.561180097 +0000 UTC m=+855.156378386" Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.562580 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" event={"ID":"e9cb520c-96ae-4782-bd71-060a6de3c212","Type":"ContainerStarted","Data":"bea4680a049c19526fead47b5a8252ba596ea3bcba6c1eb42d2144ec0ea47719"} Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.567429 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mtqtd" 
event={"ID":"0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775","Type":"ContainerStarted","Data":"23c9144ba79131412e496f74d140eefe06411573c14038e5281868005dc4ed63"} Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.568225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.573465 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" event={"ID":"d335f065-a5e0-46ca-b99c-61b2b2dbb3ea","Type":"ContainerStarted","Data":"faad1e0923cb426b0b722276f16f37a2df2fa67fe3aa1680d41c013d4db4d1fb"} Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.573832 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.576374 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" event={"ID":"701f4786-9ccc-4178-a2f7-b88ec63c7a81","Type":"ContainerStarted","Data":"77ce1cdcc32cde73d6f2af803ab6035c06977462774c68254ca41977cc858da5"} Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.589220 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-6npgh" podStartSLOduration=1.865922894 podStartE2EDuration="4.589191861s" podCreationTimestamp="2026-03-21 09:12:30 +0000 UTC" firstStartedPulling="2026-03-21 09:12:30.931815312 +0000 UTC m=+854.527013581" lastFinishedPulling="2026-03-21 09:12:33.655084279 +0000 UTC m=+857.250282548" observedRunningTime="2026-03-21 09:12:34.584123271 +0000 UTC m=+858.179321530" watchObservedRunningTime="2026-03-21 09:12:34.589191861 +0000 UTC m=+858.184390130" Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.606825 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mtqtd" 
podStartSLOduration=1.587158043 podStartE2EDuration="4.606793831s" podCreationTimestamp="2026-03-21 09:12:30 +0000 UTC" firstStartedPulling="2026-03-21 09:12:30.644640213 +0000 UTC m=+854.239838482" lastFinishedPulling="2026-03-21 09:12:33.664275991 +0000 UTC m=+857.259474270" observedRunningTime="2026-03-21 09:12:34.603065792 +0000 UTC m=+858.198264061" watchObservedRunningTime="2026-03-21 09:12:34.606793831 +0000 UTC m=+858.201992100" Mar 21 09:12:34 crc kubenswrapper[4932]: I0321 09:12:34.623113 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" podStartSLOduration=2.30852317 podStartE2EDuration="4.623092788s" podCreationTimestamp="2026-03-21 09:12:30 +0000 UTC" firstStartedPulling="2026-03-21 09:12:31.360315589 +0000 UTC m=+854.955513868" lastFinishedPulling="2026-03-21 09:12:33.674885217 +0000 UTC m=+857.270083486" observedRunningTime="2026-03-21 09:12:34.622342094 +0000 UTC m=+858.217540383" watchObservedRunningTime="2026-03-21 09:12:34.623092788 +0000 UTC m=+858.218291057" Mar 21 09:12:36 crc kubenswrapper[4932]: I0321 09:12:36.592811 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" event={"ID":"701f4786-9ccc-4178-a2f7-b88ec63c7a81","Type":"ContainerStarted","Data":"70680350210ed5c20d26da22c431c64285029351b06d8972979423225c7cdd59"} Mar 21 09:12:36 crc kubenswrapper[4932]: I0321 09:12:36.614708 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-2j2f2" podStartSLOduration=1.4001271339999999 podStartE2EDuration="6.6146875s" podCreationTimestamp="2026-03-21 09:12:30 +0000 UTC" firstStartedPulling="2026-03-21 09:12:30.810046176 +0000 UTC m=+854.405244445" lastFinishedPulling="2026-03-21 09:12:36.024606542 +0000 UTC m=+859.619804811" observedRunningTime="2026-03-21 09:12:36.608990909 +0000 UTC m=+860.204189208" watchObservedRunningTime="2026-03-21 
09:12:36.6146875 +0000 UTC m=+860.209885769" Mar 21 09:12:40 crc kubenswrapper[4932]: I0321 09:12:40.607317 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mtqtd" Mar 21 09:12:40 crc kubenswrapper[4932]: I0321 09:12:40.936016 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:40 crc kubenswrapper[4932]: I0321 09:12:40.936072 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:40 crc kubenswrapper[4932]: I0321 09:12:40.940126 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:41 crc kubenswrapper[4932]: I0321 09:12:41.231052 4932 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 21 09:12:41 crc kubenswrapper[4932]: I0321 09:12:41.625144 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55654448d6-vhn5c" Mar 21 09:12:41 crc kubenswrapper[4932]: I0321 09:12:41.685783 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v7lxk"] Mar 21 09:12:51 crc kubenswrapper[4932]: I0321 09:12:51.174429 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-vj7tk" Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.225289 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.225835 4932 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.225877 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.226575 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8decec33670c2842c92261f8dd47259d538fd63496f5f2522fb72de9d5a14bf4"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.226637 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://8decec33670c2842c92261f8dd47259d538fd63496f5f2522fb72de9d5a14bf4" gracePeriod=600 Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.767339 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="8decec33670c2842c92261f8dd47259d538fd63496f5f2522fb72de9d5a14bf4" exitCode=0 Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.767553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"8decec33670c2842c92261f8dd47259d538fd63496f5f2522fb72de9d5a14bf4"} Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.768130 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"91e3049bee861f37df7d289eee8d3ed5ac012bf0c17d73d1859a4aa9a278e9f2"} Mar 21 09:13:00 crc kubenswrapper[4932]: I0321 09:13:00.768155 4932 scope.go:117] "RemoveContainer" containerID="edd87e1d2bb0a42fc79fe0fc8fafbf90d9697e65cfd8fc6baf73b7211d563b23" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.194012 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx"] Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.195559 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.197302 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.205532 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx"] Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.234173 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hnt\" (UniqueName: \"kubernetes.io/projected/b38770bf-4b6f-49e8-9295-3bbe5014817e-kube-api-access-g8hnt\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.234306 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.234382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.335606 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.335680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.335718 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8hnt\" (UniqueName: \"kubernetes.io/projected/b38770bf-4b6f-49e8-9295-3bbe5014817e-kube-api-access-g8hnt\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.336132 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.336199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.356254 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8hnt\" (UniqueName: \"kubernetes.io/projected/b38770bf-4b6f-49e8-9295-3bbe5014817e-kube-api-access-g8hnt\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.512748 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" Mar 21 09:13:05 crc kubenswrapper[4932]: I0321 09:13:05.914895 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx"] Mar 21 09:13:06 crc kubenswrapper[4932]: I0321 09:13:06.734875 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-v7lxk" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" containerName="console" containerID="cri-o://ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0" gracePeriod=15 Mar 21 09:13:06 crc kubenswrapper[4932]: I0321 09:13:06.801848 4932 generic.go:334] "Generic (PLEG): container finished" podID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerID="a653cf150b1aacd58d4b7bf4836d6bdf2f9c1f82988547f59c170141c7687fc0" exitCode=0 Mar 21 09:13:06 crc kubenswrapper[4932]: I0321 09:13:06.801904 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" event={"ID":"b38770bf-4b6f-49e8-9295-3bbe5014817e","Type":"ContainerDied","Data":"a653cf150b1aacd58d4b7bf4836d6bdf2f9c1f82988547f59c170141c7687fc0"} Mar 21 09:13:06 crc kubenswrapper[4932]: I0321 09:13:06.801947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" event={"ID":"b38770bf-4b6f-49e8-9295-3bbe5014817e","Type":"ContainerStarted","Data":"c9e586a64fc3ae748d5a41accf70c4445e4b97a908dc4e7fe3509e77ae9f2cd9"} Mar 21 09:13:06 crc kubenswrapper[4932]: I0321 09:13:06.803508 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.138646 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-v7lxk_a5e1cc78-be1f-45a2-87b3-73c62790c894/console/0.log" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.138905 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262732 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-oauth-serving-cert\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262777 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-config\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262813 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh95x\" (UniqueName: \"kubernetes.io/projected/a5e1cc78-be1f-45a2-87b3-73c62790c894-kube-api-access-lh95x\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262860 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-serving-cert\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262880 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-oauth-config\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-service-ca\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.262926 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-trusted-ca-bundle\") pod \"a5e1cc78-be1f-45a2-87b3-73c62790c894\" (UID: \"a5e1cc78-be1f-45a2-87b3-73c62790c894\") " Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.263566 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.263579 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-config" (OuterVolumeSpecName: "console-config") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.263620 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.263953 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-service-ca" (OuterVolumeSpecName: "service-ca") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.269995 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.270837 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e1cc78-be1f-45a2-87b3-73c62790c894-kube-api-access-lh95x" (OuterVolumeSpecName: "kube-api-access-lh95x") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "kube-api-access-lh95x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.275789 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a5e1cc78-be1f-45a2-87b3-73c62790c894" (UID: "a5e1cc78-be1f-45a2-87b3-73c62790c894"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364176 4932 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364219 4932 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364231 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh95x\" (UniqueName: \"kubernetes.io/projected/a5e1cc78-be1f-45a2-87b3-73c62790c894-kube-api-access-lh95x\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364241 4932 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364249 4932 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e1cc78-be1f-45a2-87b3-73c62790c894-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364257 4932 reconciler_common.go:293] "Volume 
detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.364265 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e1cc78-be1f-45a2-87b3-73c62790c894-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.809906 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v7lxk_a5e1cc78-be1f-45a2-87b3-73c62790c894/console/0.log" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.809959 4932 generic.go:334] "Generic (PLEG): container finished" podID="a5e1cc78-be1f-45a2-87b3-73c62790c894" containerID="ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0" exitCode=2 Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.809986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v7lxk" event={"ID":"a5e1cc78-be1f-45a2-87b3-73c62790c894","Type":"ContainerDied","Data":"ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0"} Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.810020 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v7lxk" event={"ID":"a5e1cc78-be1f-45a2-87b3-73c62790c894","Type":"ContainerDied","Data":"554085bb79def92c92417e03029964c9e8e451f1ff07a2f19853e6f927adf800"} Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.810037 4932 scope.go:117] "RemoveContainer" containerID="ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.810074 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v7lxk" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.827825 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v7lxk"] Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.832268 4932 scope.go:117] "RemoveContainer" containerID="ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0" Mar 21 09:13:07 crc kubenswrapper[4932]: E0321 09:13:07.832980 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0\": container with ID starting with ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0 not found: ID does not exist" containerID="ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.833081 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0"} err="failed to get container status \"ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0\": rpc error: code = NotFound desc = could not find container \"ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0\": container with ID starting with ea83108f7eb4c8b225ce7602d186d296cdbf6c663b2a53068b2dd70979dfebd0 not found: ID does not exist" Mar 21 09:13:07 crc kubenswrapper[4932]: I0321 09:13:07.838668 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-v7lxk"] Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.749120 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tpb8w"] Mar 21 09:13:08 crc kubenswrapper[4932]: E0321 09:13:08.749636 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" 
containerName="console" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.749646 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" containerName="console" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.749770 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" containerName="console" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.750551 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.763658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tpb8w"] Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.798123 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-utilities\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.798198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-catalog-content\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.798281 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rxt\" (UniqueName: \"kubernetes.io/projected/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-kube-api-access-t2rxt\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 
09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.817604 4932 generic.go:334] "Generic (PLEG): container finished" podID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerID="9387d2f17e6a11e4b41349e5989eeefc81865f7a845309e1f9a3cd940f4f3833" exitCode=0 Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.817636 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" event={"ID":"b38770bf-4b6f-49e8-9295-3bbe5014817e","Type":"ContainerDied","Data":"9387d2f17e6a11e4b41349e5989eeefc81865f7a845309e1f9a3cd940f4f3833"} Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.899863 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2rxt\" (UniqueName: \"kubernetes.io/projected/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-kube-api-access-t2rxt\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.899977 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-utilities\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.900018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-catalog-content\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.900507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-utilities\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.900551 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-catalog-content\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:08 crc kubenswrapper[4932]: I0321 09:13:08.918831 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2rxt\" (UniqueName: \"kubernetes.io/projected/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-kube-api-access-t2rxt\") pod \"redhat-operators-tpb8w\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") " pod="openshift-marketplace/redhat-operators-tpb8w" Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.074310 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.517205 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tpb8w"]
Mar 21 09:13:09 crc kubenswrapper[4932]: W0321 09:13:09.530686 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad685c8a_aa8f_45e2_8b66_cec1b99ad66c.slice/crio-b82c5ff920bad3d1fb1cfb7b095cc5b6eeea7012c8fe5fe02d23e5bcee83a4e3 WatchSource:0}: Error finding container b82c5ff920bad3d1fb1cfb7b095cc5b6eeea7012c8fe5fe02d23e5bcee83a4e3: Status 404 returned error can't find the container with id b82c5ff920bad3d1fb1cfb7b095cc5b6eeea7012c8fe5fe02d23e5bcee83a4e3
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.710576 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e1cc78-be1f-45a2-87b3-73c62790c894" path="/var/lib/kubelet/pods/a5e1cc78-be1f-45a2-87b3-73c62790c894/volumes"
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.825746 4932 generic.go:334] "Generic (PLEG): container finished" podID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerID="bdd5a66050d98b89796889f213e282a76b140f38fe5ec0fc53b0552787e72c88" exitCode=0
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.825794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" event={"ID":"b38770bf-4b6f-49e8-9295-3bbe5014817e","Type":"ContainerDied","Data":"bdd5a66050d98b89796889f213e282a76b140f38fe5ec0fc53b0552787e72c88"}
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.829212 4932 generic.go:334] "Generic (PLEG): container finished" podID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerID="a442cec9c37b8bcc745493f4ea98be30e8c8b1c303611d51ba2d56f02b0d8a57" exitCode=0
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.829258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerDied","Data":"a442cec9c37b8bcc745493f4ea98be30e8c8b1c303611d51ba2d56f02b0d8a57"}
Mar 21 09:13:09 crc kubenswrapper[4932]: I0321 09:13:09.829285 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerStarted","Data":"b82c5ff920bad3d1fb1cfb7b095cc5b6eeea7012c8fe5fe02d23e5bcee83a4e3"}
Mar 21 09:13:10 crc kubenswrapper[4932]: I0321 09:13:10.836463 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerStarted","Data":"44e65a989f318a3ff77b9f512611c9dbd3d6df77f3c8b843d25fec10c725708e"}
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.088864 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx"
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.126705 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-util\") pod \"b38770bf-4b6f-49e8-9295-3bbe5014817e\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") "
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.126815 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-bundle\") pod \"b38770bf-4b6f-49e8-9295-3bbe5014817e\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") "
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.126940 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hnt\" (UniqueName: \"kubernetes.io/projected/b38770bf-4b6f-49e8-9295-3bbe5014817e-kube-api-access-g8hnt\") pod \"b38770bf-4b6f-49e8-9295-3bbe5014817e\" (UID: \"b38770bf-4b6f-49e8-9295-3bbe5014817e\") "
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.127727 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-bundle" (OuterVolumeSpecName: "bundle") pod "b38770bf-4b6f-49e8-9295-3bbe5014817e" (UID: "b38770bf-4b6f-49e8-9295-3bbe5014817e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.132626 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38770bf-4b6f-49e8-9295-3bbe5014817e-kube-api-access-g8hnt" (OuterVolumeSpecName: "kube-api-access-g8hnt") pod "b38770bf-4b6f-49e8-9295-3bbe5014817e" (UID: "b38770bf-4b6f-49e8-9295-3bbe5014817e"). InnerVolumeSpecName "kube-api-access-g8hnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.142676 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-util" (OuterVolumeSpecName: "util") pod "b38770bf-4b6f-49e8-9295-3bbe5014817e" (UID: "b38770bf-4b6f-49e8-9295-3bbe5014817e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.229912 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8hnt\" (UniqueName: \"kubernetes.io/projected/b38770bf-4b6f-49e8-9295-3bbe5014817e-kube-api-access-g8hnt\") on node \"crc\" DevicePath \"\""
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.229957 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-util\") on node \"crc\" DevicePath \"\""
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.229969 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b38770bf-4b6f-49e8-9295-3bbe5014817e-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.849097 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx" event={"ID":"b38770bf-4b6f-49e8-9295-3bbe5014817e","Type":"ContainerDied","Data":"c9e586a64fc3ae748d5a41accf70c4445e4b97a908dc4e7fe3509e77ae9f2cd9"}
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.849171 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9e586a64fc3ae748d5a41accf70c4445e4b97a908dc4e7fe3509e77ae9f2cd9"
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.849243 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx"
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.853590 4932 generic.go:334] "Generic (PLEG): container finished" podID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerID="44e65a989f318a3ff77b9f512611c9dbd3d6df77f3c8b843d25fec10c725708e" exitCode=0
Mar 21 09:13:11 crc kubenswrapper[4932]: I0321 09:13:11.853674 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerDied","Data":"44e65a989f318a3ff77b9f512611c9dbd3d6df77f3c8b843d25fec10c725708e"}
Mar 21 09:13:12 crc kubenswrapper[4932]: I0321 09:13:12.862835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerStarted","Data":"53ee3918fc9dc4e6e758c65777cfa6ef8ed63817051b00b4c3ff3b796ac3c5d7"}
Mar 21 09:13:12 crc kubenswrapper[4932]: I0321 09:13:12.878962 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tpb8w" podStartSLOduration=2.304090422 podStartE2EDuration="4.878947237s" podCreationTimestamp="2026-03-21 09:13:08 +0000 UTC" firstStartedPulling="2026-03-21 09:13:09.830586835 +0000 UTC m=+893.425785104" lastFinishedPulling="2026-03-21 09:13:12.40544364 +0000 UTC m=+896.000641919" observedRunningTime="2026-03-21 09:13:12.878141401 +0000 UTC m=+896.473339670" watchObservedRunningTime="2026-03-21 09:13:12.878947237 +0000 UTC m=+896.474145506"
Mar 21 09:13:19 crc kubenswrapper[4932]: I0321 09:13:19.074800 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:19 crc kubenswrapper[4932]: I0321 09:13:19.075381 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:19 crc kubenswrapper[4932]: I0321 09:13:19.118597 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:19 crc kubenswrapper[4932]: I0321 09:13:19.942493 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.407461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"]
Mar 21 09:13:20 crc kubenswrapper[4932]: E0321 09:13:20.407697 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="pull"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.407709 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="pull"
Mar 21 09:13:20 crc kubenswrapper[4932]: E0321 09:13:20.407730 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="extract"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.407736 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="extract"
Mar 21 09:13:20 crc kubenswrapper[4932]: E0321 09:13:20.407746 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="util"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.407752 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="util"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.407846 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38770bf-4b6f-49e8-9295-3bbe5014817e" containerName="extract"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.408264 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.414227 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.414842 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.415039 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.416382 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.416720 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-l6rjn"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.425707 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"]
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.466620 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a2f426b-c3bb-4248-8a49-fba11b225c08-webhook-cert\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.466680 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a2f426b-c3bb-4248-8a49-fba11b225c08-apiservice-cert\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.466720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcchv\" (UniqueName: \"kubernetes.io/projected/7a2f426b-c3bb-4248-8a49-fba11b225c08-kube-api-access-wcchv\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.567948 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a2f426b-c3bb-4248-8a49-fba11b225c08-webhook-cert\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.568006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a2f426b-c3bb-4248-8a49-fba11b225c08-apiservice-cert\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.568029 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcchv\" (UniqueName: \"kubernetes.io/projected/7a2f426b-c3bb-4248-8a49-fba11b225c08-kube-api-access-wcchv\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.594633 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a2f426b-c3bb-4248-8a49-fba11b225c08-apiservice-cert\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.595419 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a2f426b-c3bb-4248-8a49-fba11b225c08-webhook-cert\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.597953 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcchv\" (UniqueName: \"kubernetes.io/projected/7a2f426b-c3bb-4248-8a49-fba11b225c08-kube-api-access-wcchv\") pod \"metallb-operator-controller-manager-89c599fbb-6tg96\" (UID: \"7a2f426b-c3bb-4248-8a49-fba11b225c08\") " pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.726106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.841811 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"]
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.843792 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.850306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.850370 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wh9xk"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.851152 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.866608 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"]
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.973515 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-webhook-cert\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.973603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-apiservice-cert\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:20 crc kubenswrapper[4932]: I0321 09:13:20.973649 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhnkp\" (UniqueName: \"kubernetes.io/projected/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-kube-api-access-mhnkp\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.075548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-webhook-cert\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.075647 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-apiservice-cert\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.075684 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhnkp\" (UniqueName: \"kubernetes.io/projected/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-kube-api-access-mhnkp\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.081473 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-apiservice-cert\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.096922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-webhook-cert\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.097088 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhnkp\" (UniqueName: \"kubernetes.io/projected/d361ba9f-cc61-4e3c-a206-0ac2bc5ac090-kube-api-access-mhnkp\") pod \"metallb-operator-webhook-server-58bf4cc6c7-jtsrl\" (UID: \"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090\") " pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.176426 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.179588 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"]
Mar 21 09:13:21 crc kubenswrapper[4932]: W0321 09:13:21.182474 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a2f426b_c3bb_4248_8a49_fba11b225c08.slice/crio-bdcba3381d70b6c3598979eb3375d321e53e3829172300b4f340787d4b8c8378 WatchSource:0}: Error finding container bdcba3381d70b6c3598979eb3375d321e53e3829172300b4f340787d4b8c8378: Status 404 returned error can't find the container with id bdcba3381d70b6c3598979eb3375d321e53e3829172300b4f340787d4b8c8378
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.610271 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"]
Mar 21 09:13:21 crc kubenswrapper[4932]: W0321 09:13:21.621714 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd361ba9f_cc61_4e3c_a206_0ac2bc5ac090.slice/crio-f4f2019af98e5267742aa515b48b9a87a5db43556b75c6c4f49b6199638a6092 WatchSource:0}: Error finding container f4f2019af98e5267742aa515b48b9a87a5db43556b75c6c4f49b6199638a6092: Status 404 returned error can't find the container with id f4f2019af98e5267742aa515b48b9a87a5db43556b75c6c4f49b6199638a6092
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.915559 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl" event={"ID":"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090","Type":"ContainerStarted","Data":"f4f2019af98e5267742aa515b48b9a87a5db43556b75c6c4f49b6199638a6092"}
Mar 21 09:13:21 crc kubenswrapper[4932]: I0321 09:13:21.917190 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96" event={"ID":"7a2f426b-c3bb-4248-8a49-fba11b225c08","Type":"ContainerStarted","Data":"bdcba3381d70b6c3598979eb3375d321e53e3829172300b4f340787d4b8c8378"}
Mar 21 09:13:22 crc kubenswrapper[4932]: I0321 09:13:22.740782 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tpb8w"]
Mar 21 09:13:22 crc kubenswrapper[4932]: I0321 09:13:22.741056 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tpb8w" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="registry-server" containerID="cri-o://53ee3918fc9dc4e6e758c65777cfa6ef8ed63817051b00b4c3ff3b796ac3c5d7" gracePeriod=2
Mar 21 09:13:22 crc kubenswrapper[4932]: I0321 09:13:22.925923 4932 generic.go:334] "Generic (PLEG): container finished" podID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerID="53ee3918fc9dc4e6e758c65777cfa6ef8ed63817051b00b4c3ff3b796ac3c5d7" exitCode=0
Mar 21 09:13:22 crc kubenswrapper[4932]: I0321 09:13:22.925985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerDied","Data":"53ee3918fc9dc4e6e758c65777cfa6ef8ed63817051b00b4c3ff3b796ac3c5d7"}
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.150977 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.204089 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-catalog-content\") pod \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") "
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.204194 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2rxt\" (UniqueName: \"kubernetes.io/projected/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-kube-api-access-t2rxt\") pod \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") "
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.204260 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-utilities\") pod \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\" (UID: \"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c\") "
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.205254 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-utilities" (OuterVolumeSpecName: "utilities") pod "ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" (UID: "ad685c8a-aa8f-45e2-8b66-cec1b99ad66c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.213427 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-kube-api-access-t2rxt" (OuterVolumeSpecName: "kube-api-access-t2rxt") pod "ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" (UID: "ad685c8a-aa8f-45e2-8b66-cec1b99ad66c"). InnerVolumeSpecName "kube-api-access-t2rxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.305609 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2rxt\" (UniqueName: \"kubernetes.io/projected/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-kube-api-access-t2rxt\") on node \"crc\" DevicePath \"\""
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.305659 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.363033 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" (UID: "ad685c8a-aa8f-45e2-8b66-cec1b99ad66c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.406445 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.935368 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpb8w" event={"ID":"ad685c8a-aa8f-45e2-8b66-cec1b99ad66c","Type":"ContainerDied","Data":"b82c5ff920bad3d1fb1cfb7b095cc5b6eeea7012c8fe5fe02d23e5bcee83a4e3"}
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.935433 4932 scope.go:117] "RemoveContainer" containerID="53ee3918fc9dc4e6e758c65777cfa6ef8ed63817051b00b4c3ff3b796ac3c5d7"
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.935521 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpb8w"
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.956499 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tpb8w"]
Mar 21 09:13:23 crc kubenswrapper[4932]: I0321 09:13:23.965734 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tpb8w"]
Mar 21 09:13:24 crc kubenswrapper[4932]: I0321 09:13:24.501601 4932 scope.go:117] "RemoveContainer" containerID="44e65a989f318a3ff77b9f512611c9dbd3d6df77f3c8b843d25fec10c725708e"
Mar 21 09:13:24 crc kubenswrapper[4932]: I0321 09:13:24.575674 4932 scope.go:117] "RemoveContainer" containerID="a442cec9c37b8bcc745493f4ea98be30e8c8b1c303611d51ba2d56f02b0d8a57"
Mar 21 09:13:24 crc kubenswrapper[4932]: I0321 09:13:24.945326 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96" event={"ID":"7a2f426b-c3bb-4248-8a49-fba11b225c08","Type":"ContainerStarted","Data":"446ff997573e76c480bf584101090d3238d08e4219c2a938657703321e2ceb42"}
Mar 21 09:13:24 crc kubenswrapper[4932]: I0321 09:13:24.945526 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:13:25 crc kubenswrapper[4932]: I0321 09:13:25.713829 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" path="/var/lib/kubelet/pods/ad685c8a-aa8f-45e2-8b66-cec1b99ad66c/volumes"
Mar 21 09:13:26 crc kubenswrapper[4932]: I0321 09:13:26.959909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl" event={"ID":"d361ba9f-cc61-4e3c-a206-0ac2bc5ac090","Type":"ContainerStarted","Data":"727f863d0b95b5de127382db40bd862cb47f02fdc8f2fc012217c39d4998021d"}
Mar 21 09:13:26 crc kubenswrapper[4932]: I0321 09:13:26.960021 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:13:26 crc kubenswrapper[4932]: I0321 09:13:26.979949 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96" podStartSLOduration=3.589049589 podStartE2EDuration="6.979924235s" podCreationTimestamp="2026-03-21 09:13:20 +0000 UTC" firstStartedPulling="2026-03-21 09:13:21.189127746 +0000 UTC m=+904.784326015" lastFinishedPulling="2026-03-21 09:13:24.580002392 +0000 UTC m=+908.175200661" observedRunningTime="2026-03-21 09:13:24.972853535 +0000 UTC m=+908.568051814" watchObservedRunningTime="2026-03-21 09:13:26.979924235 +0000 UTC m=+910.575122504"
Mar 21 09:13:26 crc kubenswrapper[4932]: I0321 09:13:26.982284 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl" podStartSLOduration=1.8716003749999999 podStartE2EDuration="6.982264158s" podCreationTimestamp="2026-03-21 09:13:20 +0000 UTC" firstStartedPulling="2026-03-21 09:13:21.627185826 +0000 UTC m=+905.222384095" lastFinishedPulling="2026-03-21 09:13:26.737849609 +0000 UTC m=+910.333047878" observedRunningTime="2026-03-21 09:13:26.978044636 +0000 UTC m=+910.573242905" watchObservedRunningTime="2026-03-21 09:13:26.982264158 +0000 UTC m=+910.577462427"
Mar 21 09:13:41 crc kubenswrapper[4932]: I0321 09:13:41.182882 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58bf4cc6c7-jtsrl"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.151959 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568074-9rw7k"]
Mar 21 09:14:00 crc kubenswrapper[4932]: E0321 09:14:00.153296 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="extract-content"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.153315 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="extract-content"
Mar 21 09:14:00 crc kubenswrapper[4932]: E0321 09:14:00.153373 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="extract-utilities"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.153385 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="extract-utilities"
Mar 21 09:14:00 crc kubenswrapper[4932]: E0321 09:14:00.153404 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="registry-server"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.153414 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="registry-server"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.153560 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad685c8a-aa8f-45e2-8b66-cec1b99ad66c" containerName="registry-server"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.154217 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568074-9rw7k"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.158786 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.158969 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.160072 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.170989 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568074-9rw7k"]
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.245632 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6z9k\" (UniqueName: \"kubernetes.io/projected/7ca09d16-95ed-48ef-b108-ed8a0e8c6477-kube-api-access-z6z9k\") pod \"auto-csr-approver-29568074-9rw7k\" (UID: \"7ca09d16-95ed-48ef-b108-ed8a0e8c6477\") " pod="openshift-infra/auto-csr-approver-29568074-9rw7k"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.346409 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6z9k\" (UniqueName: \"kubernetes.io/projected/7ca09d16-95ed-48ef-b108-ed8a0e8c6477-kube-api-access-z6z9k\") pod \"auto-csr-approver-29568074-9rw7k\" (UID: \"7ca09d16-95ed-48ef-b108-ed8a0e8c6477\") " pod="openshift-infra/auto-csr-approver-29568074-9rw7k"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.373939 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6z9k\" (UniqueName: \"kubernetes.io/projected/7ca09d16-95ed-48ef-b108-ed8a0e8c6477-kube-api-access-z6z9k\") pod \"auto-csr-approver-29568074-9rw7k\" (UID: \"7ca09d16-95ed-48ef-b108-ed8a0e8c6477\") " pod="openshift-infra/auto-csr-approver-29568074-9rw7k"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.485261 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568074-9rw7k"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.729471 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-89c599fbb-6tg96"
Mar 21 09:14:00 crc kubenswrapper[4932]: I0321 09:14:00.903999 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568074-9rw7k"]
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.202393 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" event={"ID":"7ca09d16-95ed-48ef-b108-ed8a0e8c6477","Type":"ContainerStarted","Data":"0e1c614d8215e1b5f2f14ef9ae274c8f786fbf6682ec0dc5fe7fc776922b55a2"}
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.359888 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bkzvk"]
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.368067 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bkzvk"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.371451 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lhb9l"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.371731 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.372612 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.397301 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6"]
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.399074 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.412802 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.417106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6"]
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.467461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jzgr4"]
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.468704 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jzgr4"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469124 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469175 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469213 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcpk\" (UniqueName: \"kubernetes.io/projected/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-kube-api-access-dzcpk\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469295 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnqjp\" (UniqueName: \"kubernetes.io/projected/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-kube-api-access-tnqjp\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk"
Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics-certs\") pod \"frr-k8s-bkzvk\" (UID:
\"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469355 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-sockets\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469400 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-conf\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469434 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-reloader\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.469460 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-startup\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.480976 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.481181 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-n7hl7" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.482301 4932 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.485741 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-s29rd"] Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.486692 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.488905 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.488909 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.502858 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-s29rd"] Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.569853 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e14cede-ac29-4c58-954c-43b97f2d2d0e-metrics-certs\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.569902 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-reloader\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.569927 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-startup\") pod \"frr-k8s-bkzvk\" 
(UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.569949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.569975 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcpk\" (UniqueName: \"kubernetes.io/projected/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-kube-api-access-dzcpk\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570038 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570075 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4v8t\" (UniqueName: \"kubernetes.io/projected/6e14cede-ac29-4c58-954c-43b97f2d2d0e-kube-api-access-t4v8t\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") 
" pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570093 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-metrics-certs\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570118 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkht\" (UniqueName: \"kubernetes.io/projected/d489024b-f8ca-4976-aa77-a2809312901d-kube-api-access-fkkht\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnqjp\" (UniqueName: \"kubernetes.io/projected/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-kube-api-access-tnqjp\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570162 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e14cede-ac29-4c58-954c-43b97f2d2d0e-cert\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570187 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics-certs\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 
09:14:01.570206 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d489024b-f8ca-4976-aa77-a2809312901d-metallb-excludel2\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570229 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-sockets\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570265 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-conf\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.570798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-conf\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.571023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-reloader\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.571701 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-startup\") pod 
\"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: E0321 09:14:01.571786 4932 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 21 09:14:01 crc kubenswrapper[4932]: E0321 09:14:01.571844 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-cert podName:36c9b068-cfad-4fb8-80e6-04eec1b1a4a6 nodeName:}" failed. No retries permitted until 2026-03-21 09:14:02.071829013 +0000 UTC m=+945.667027282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-cert") pod "frr-k8s-webhook-server-bcc4b6f68-2rjd6" (UID: "36c9b068-cfad-4fb8-80e6-04eec1b1a4a6") : secret "frr-k8s-webhook-server-cert" not found Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.572258 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: E0321 09:14:01.572641 4932 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 21 09:14:01 crc kubenswrapper[4932]: E0321 09:14:01.572676 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics-certs podName:6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0 nodeName:}" failed. No retries permitted until 2026-03-21 09:14:02.07266515 +0000 UTC m=+945.667863419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics-certs") pod "frr-k8s-bkzvk" (UID: "6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0") : secret "frr-k8s-certs-secret" not found Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.572887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-frr-sockets\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.589493 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnqjp\" (UniqueName: \"kubernetes.io/projected/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-kube-api-access-tnqjp\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.589534 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcpk\" (UniqueName: \"kubernetes.io/projected/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-kube-api-access-dzcpk\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671222 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671280 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4v8t\" (UniqueName: 
\"kubernetes.io/projected/6e14cede-ac29-4c58-954c-43b97f2d2d0e-kube-api-access-t4v8t\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671296 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-metrics-certs\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671323 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkht\" (UniqueName: \"kubernetes.io/projected/d489024b-f8ca-4976-aa77-a2809312901d-kube-api-access-fkkht\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671369 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e14cede-ac29-4c58-954c-43b97f2d2d0e-cert\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671408 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d489024b-f8ca-4976-aa77-a2809312901d-metallb-excludel2\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: E0321 09:14:01.671421 4932 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.671462 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e14cede-ac29-4c58-954c-43b97f2d2d0e-metrics-certs\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: E0321 09:14:01.671489 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist podName:d489024b-f8ca-4976-aa77-a2809312901d nodeName:}" failed. No retries permitted until 2026-03-21 09:14:02.171470377 +0000 UTC m=+945.766668646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist") pod "speaker-jzgr4" (UID: "d489024b-f8ca-4976-aa77-a2809312901d") : secret "metallb-memberlist" not found Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.673101 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d489024b-f8ca-4976-aa77-a2809312901d-metallb-excludel2\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.677120 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-metrics-certs\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.677307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e14cede-ac29-4c58-954c-43b97f2d2d0e-metrics-certs\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc 
kubenswrapper[4932]: I0321 09:14:01.679464 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6e14cede-ac29-4c58-954c-43b97f2d2d0e-cert\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.697916 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkht\" (UniqueName: \"kubernetes.io/projected/d489024b-f8ca-4976-aa77-a2809312901d-kube-api-access-fkkht\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.704772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4v8t\" (UniqueName: \"kubernetes.io/projected/6e14cede-ac29-4c58-954c-43b97f2d2d0e-kube-api-access-t4v8t\") pod \"controller-7bb4cc7c98-s29rd\" (UID: \"6e14cede-ac29-4c58-954c-43b97f2d2d0e\") " pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:01 crc kubenswrapper[4932]: I0321 09:14:01.807455 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.055678 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-s29rd"] Mar 21 09:14:02 crc kubenswrapper[4932]: W0321 09:14:02.067243 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e14cede_ac29_4c58_954c_43b97f2d2d0e.slice/crio-d89fff33e2c2420db810c9604a7d3d4878522f0cd7e47cb4544f597853bc7efe WatchSource:0}: Error finding container d89fff33e2c2420db810c9604a7d3d4878522f0cd7e47cb4544f597853bc7efe: Status 404 returned error can't find the container with id d89fff33e2c2420db810c9604a7d3d4878522f0cd7e47cb4544f597853bc7efe Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.078291 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.078429 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics-certs\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.084492 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/36c9b068-cfad-4fb8-80e6-04eec1b1a4a6-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-2rjd6\" (UID: \"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.085410 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0-metrics-certs\") pod \"frr-k8s-bkzvk\" (UID: \"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0\") " pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.179957 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:02 crc kubenswrapper[4932]: E0321 09:14:02.180138 4932 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 21 09:14:02 crc kubenswrapper[4932]: E0321 09:14:02.180189 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist podName:d489024b-f8ca-4976-aa77-a2809312901d nodeName:}" failed. No retries permitted until 2026-03-21 09:14:03.180175137 +0000 UTC m=+946.775373406 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist") pod "speaker-jzgr4" (UID: "d489024b-f8ca-4976-aa77-a2809312901d") : secret "metallb-memberlist" not found Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.209512 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" event={"ID":"7ca09d16-95ed-48ef-b108-ed8a0e8c6477","Type":"ContainerStarted","Data":"b75f965b3a33331d846a06b527ec365b61bb6ed9d6c987472e9f639d2c7424c5"} Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.210875 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-s29rd" event={"ID":"6e14cede-ac29-4c58-954c-43b97f2d2d0e","Type":"ContainerStarted","Data":"d89fff33e2c2420db810c9604a7d3d4878522f0cd7e47cb4544f597853bc7efe"} Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.224668 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" podStartSLOduration=1.429714362 podStartE2EDuration="2.224650437s" podCreationTimestamp="2026-03-21 09:14:00 +0000 UTC" firstStartedPulling="2026-03-21 09:14:00.912640971 +0000 UTC m=+944.507839230" lastFinishedPulling="2026-03-21 09:14:01.707577016 +0000 UTC m=+945.302775305" observedRunningTime="2026-03-21 09:14:02.221671864 +0000 UTC m=+945.816870133" watchObservedRunningTime="2026-03-21 09:14:02.224650437 +0000 UTC m=+945.819848706" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.291826 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.316744 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:02 crc kubenswrapper[4932]: I0321 09:14:02.514840 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6"] Mar 21 09:14:02 crc kubenswrapper[4932]: W0321 09:14:02.523100 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36c9b068_cfad_4fb8_80e6_04eec1b1a4a6.slice/crio-3608bdab4b04d9d97201ae5a02742f7c3f1d67daced1418327641db15ee225ff WatchSource:0}: Error finding container 3608bdab4b04d9d97201ae5a02742f7c3f1d67daced1418327641db15ee225ff: Status 404 returned error can't find the container with id 3608bdab4b04d9d97201ae5a02742f7c3f1d67daced1418327641db15ee225ff Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.193618 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.198962 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d489024b-f8ca-4976-aa77-a2809312901d-memberlist\") pod \"speaker-jzgr4\" (UID: \"d489024b-f8ca-4976-aa77-a2809312901d\") " pod="metallb-system/speaker-jzgr4" Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.220728 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" event={"ID":"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6","Type":"ContainerStarted","Data":"3608bdab4b04d9d97201ae5a02742f7c3f1d67daced1418327641db15ee225ff"} Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.222489 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-s29rd" 
event={"ID":"6e14cede-ac29-4c58-954c-43b97f2d2d0e","Type":"ContainerStarted","Data":"0b1cd839b872e66e7b13dc2e005453a83a05d768b05a072a7cb4d289a5a34198"} Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.222535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-s29rd" event={"ID":"6e14cede-ac29-4c58-954c-43b97f2d2d0e","Type":"ContainerStarted","Data":"4851d12cebc284e7f136a1c58d6263e28e4b897eb942149be76e6ea0444ecf26"} Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.222608 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.224543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"3276db01f90c4abfeabded750db069d2182e91fdf4716d6e2181a023e0a13446"} Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.225923 4932 generic.go:334] "Generic (PLEG): container finished" podID="7ca09d16-95ed-48ef-b108-ed8a0e8c6477" containerID="b75f965b3a33331d846a06b527ec365b61bb6ed9d6c987472e9f639d2c7424c5" exitCode=0 Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.225953 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" event={"ID":"7ca09d16-95ed-48ef-b108-ed8a0e8c6477","Type":"ContainerDied","Data":"b75f965b3a33331d846a06b527ec365b61bb6ed9d6c987472e9f639d2c7424c5"} Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.244244 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-s29rd" podStartSLOduration=2.244216183 podStartE2EDuration="2.244216183s" podCreationTimestamp="2026-03-21 09:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:14:03.239068151 
+0000 UTC m=+946.834266420" watchObservedRunningTime="2026-03-21 09:14:03.244216183 +0000 UTC m=+946.839414452" Mar 21 09:14:03 crc kubenswrapper[4932]: I0321 09:14:03.289185 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jzgr4" Mar 21 09:14:03 crc kubenswrapper[4932]: W0321 09:14:03.314783 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd489024b_f8ca_4976_aa77_a2809312901d.slice/crio-72c394426d2c431f38c5298ee2092b372d89881495dd690a6198ec3a2e456ea7 WatchSource:0}: Error finding container 72c394426d2c431f38c5298ee2092b372d89881495dd690a6198ec3a2e456ea7: Status 404 returned error can't find the container with id 72c394426d2c431f38c5298ee2092b372d89881495dd690a6198ec3a2e456ea7 Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.251701 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jzgr4" event={"ID":"d489024b-f8ca-4976-aa77-a2809312901d","Type":"ContainerStarted","Data":"31adce1d77ca94046bf57e5aa2aeb316642820ae04a7e1a0e3298bd5c2bead52"} Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.252062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jzgr4" event={"ID":"d489024b-f8ca-4976-aa77-a2809312901d","Type":"ContainerStarted","Data":"522cd77c53084826216b221d64d97b928771029693520d2647e1e080edd0a32f"} Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.252078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jzgr4" event={"ID":"d489024b-f8ca-4976-aa77-a2809312901d","Type":"ContainerStarted","Data":"72c394426d2c431f38c5298ee2092b372d89881495dd690a6198ec3a2e456ea7"} Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.252615 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jzgr4" Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.290295 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jzgr4" podStartSLOduration=3.290277677 podStartE2EDuration="3.290277677s" podCreationTimestamp="2026-03-21 09:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:14:04.289539173 +0000 UTC m=+947.884737442" watchObservedRunningTime="2026-03-21 09:14:04.290277677 +0000 UTC m=+947.885475946" Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.607098 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.718652 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6z9k\" (UniqueName: \"kubernetes.io/projected/7ca09d16-95ed-48ef-b108-ed8a0e8c6477-kube-api-access-z6z9k\") pod \"7ca09d16-95ed-48ef-b108-ed8a0e8c6477\" (UID: \"7ca09d16-95ed-48ef-b108-ed8a0e8c6477\") " Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.725598 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca09d16-95ed-48ef-b108-ed8a0e8c6477-kube-api-access-z6z9k" (OuterVolumeSpecName: "kube-api-access-z6z9k") pod "7ca09d16-95ed-48ef-b108-ed8a0e8c6477" (UID: "7ca09d16-95ed-48ef-b108-ed8a0e8c6477"). InnerVolumeSpecName "kube-api-access-z6z9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:14:04 crc kubenswrapper[4932]: I0321 09:14:04.820540 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6z9k\" (UniqueName: \"kubernetes.io/projected/7ca09d16-95ed-48ef-b108-ed8a0e8c6477-kube-api-access-z6z9k\") on node \"crc\" DevicePath \"\"" Mar 21 09:14:05 crc kubenswrapper[4932]: I0321 09:14:05.268281 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" Mar 21 09:14:05 crc kubenswrapper[4932]: I0321 09:14:05.268284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568074-9rw7k" event={"ID":"7ca09d16-95ed-48ef-b108-ed8a0e8c6477","Type":"ContainerDied","Data":"0e1c614d8215e1b5f2f14ef9ae274c8f786fbf6682ec0dc5fe7fc776922b55a2"} Mar 21 09:14:05 crc kubenswrapper[4932]: I0321 09:14:05.268377 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1c614d8215e1b5f2f14ef9ae274c8f786fbf6682ec0dc5fe7fc776922b55a2" Mar 21 09:14:05 crc kubenswrapper[4932]: I0321 09:14:05.305245 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568068-zdq5z"] Mar 21 09:14:05 crc kubenswrapper[4932]: I0321 09:14:05.309147 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568068-zdq5z"] Mar 21 09:14:05 crc kubenswrapper[4932]: I0321 09:14:05.712084 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e36e90-607d-4a4d-9736-ab89fe133c9a" path="/var/lib/kubelet/pods/50e36e90-607d-4a4d-9736-ab89fe133c9a/volumes" Mar 21 09:14:10 crc kubenswrapper[4932]: I0321 09:14:10.335290 4932 generic.go:334] "Generic (PLEG): container finished" podID="6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0" containerID="bccb1197cb35646f97deac642884fa236239a7066e5c8786ff7bfe75aa64c572" exitCode=0 Mar 21 09:14:10 crc kubenswrapper[4932]: I0321 09:14:10.335343 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerDied","Data":"bccb1197cb35646f97deac642884fa236239a7066e5c8786ff7bfe75aa64c572"} Mar 21 09:14:10 crc kubenswrapper[4932]: I0321 09:14:10.337641 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" 
event={"ID":"36c9b068-cfad-4fb8-80e6-04eec1b1a4a6","Type":"ContainerStarted","Data":"c87976dd1a8116c3dc003650498a740e5f73e36604159a6c5bcd932d3d06f1cf"} Mar 21 09:14:10 crc kubenswrapper[4932]: I0321 09:14:10.409670 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" podStartSLOduration=2.12446082 podStartE2EDuration="9.409651104s" podCreationTimestamp="2026-03-21 09:14:01 +0000 UTC" firstStartedPulling="2026-03-21 09:14:02.526110809 +0000 UTC m=+946.121309078" lastFinishedPulling="2026-03-21 09:14:09.811301073 +0000 UTC m=+953.406499362" observedRunningTime="2026-03-21 09:14:10.407433474 +0000 UTC m=+954.002631753" watchObservedRunningTime="2026-03-21 09:14:10.409651104 +0000 UTC m=+954.004849383" Mar 21 09:14:11 crc kubenswrapper[4932]: I0321 09:14:11.349958 4932 generic.go:334] "Generic (PLEG): container finished" podID="6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0" containerID="9997091374e936bbc69699688aa4bef46d8e0b122d0b65896cf2f078daedfceb" exitCode=0 Mar 21 09:14:11 crc kubenswrapper[4932]: I0321 09:14:11.351525 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerDied","Data":"9997091374e936bbc69699688aa4bef46d8e0b122d0b65896cf2f078daedfceb"} Mar 21 09:14:11 crc kubenswrapper[4932]: I0321 09:14:11.351594 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:12 crc kubenswrapper[4932]: I0321 09:14:12.359799 4932 generic.go:334] "Generic (PLEG): container finished" podID="6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0" containerID="03c34b63b96e1d23e5545e1c6161acee4263d1784e202ba4def874c611e5d665" exitCode=0 Mar 21 09:14:12 crc kubenswrapper[4932]: I0321 09:14:12.359861 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" 
event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerDied","Data":"03c34b63b96e1d23e5545e1c6161acee4263d1784e202ba4def874c611e5d665"} Mar 21 09:14:13 crc kubenswrapper[4932]: I0321 09:14:13.295626 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jzgr4" Mar 21 09:14:13 crc kubenswrapper[4932]: I0321 09:14:13.375341 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"3e36146d98226a06bf798da6fa2e94c225ec9f0fe9f127c03429bccc1870b0a9"} Mar 21 09:14:13 crc kubenswrapper[4932]: I0321 09:14:13.375418 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"d4ea44b9bd49e8c8445c37be46712122c4a429f233fd8ae82299e9682eaa4525"} Mar 21 09:14:13 crc kubenswrapper[4932]: I0321 09:14:13.375429 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"5f23753f3990cf3de095ff06386f311944cd06249b13d54d27ac9eaf95082213"} Mar 21 09:14:13 crc kubenswrapper[4932]: I0321 09:14:13.375440 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"c2097ab0ebb5ff41a43b843ab60b0579003609de541ff354bd105df1f17c9d92"} Mar 21 09:14:14 crc kubenswrapper[4932]: I0321 09:14:14.387046 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"44563c6c7f252f73708ed3b07f7e868ac4dbce00818f984969c0bd205e6c5a0b"} Mar 21 09:14:14 crc kubenswrapper[4932]: I0321 09:14:14.387325 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bkzvk" 
event={"ID":"6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0","Type":"ContainerStarted","Data":"5b409f41744c2b630ea09dc657caf0c75820fb743cbebecac7713186be685bbc"} Mar 21 09:14:14 crc kubenswrapper[4932]: I0321 09:14:14.387453 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:14 crc kubenswrapper[4932]: I0321 09:14:14.409986 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bkzvk" podStartSLOduration=5.994677272 podStartE2EDuration="13.409966802s" podCreationTimestamp="2026-03-21 09:14:01 +0000 UTC" firstStartedPulling="2026-03-21 09:14:02.37162271 +0000 UTC m=+945.966820979" lastFinishedPulling="2026-03-21 09:14:09.78691224 +0000 UTC m=+953.382110509" observedRunningTime="2026-03-21 09:14:14.405646308 +0000 UTC m=+958.000844597" watchObservedRunningTime="2026-03-21 09:14:14.409966802 +0000 UTC m=+958.005165071" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.947800 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x989z"] Mar 21 09:14:15 crc kubenswrapper[4932]: E0321 09:14:15.948112 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca09d16-95ed-48ef-b108-ed8a0e8c6477" containerName="oc" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.948127 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca09d16-95ed-48ef-b108-ed8a0e8c6477" containerName="oc" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.948235 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca09d16-95ed-48ef-b108-ed8a0e8c6477" containerName="oc" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.948686 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.953688 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lkftd" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.953835 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.954092 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 21 09:14:15 crc kubenswrapper[4932]: I0321 09:14:15.963288 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x989z"] Mar 21 09:14:16 crc kubenswrapper[4932]: I0321 09:14:16.086908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh4g\" (UniqueName: \"kubernetes.io/projected/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61-kube-api-access-mnh4g\") pod \"openstack-operator-index-x989z\" (UID: \"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61\") " pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:16 crc kubenswrapper[4932]: I0321 09:14:16.187663 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh4g\" (UniqueName: \"kubernetes.io/projected/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61-kube-api-access-mnh4g\") pod \"openstack-operator-index-x989z\" (UID: \"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61\") " pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:16 crc kubenswrapper[4932]: I0321 09:14:16.213953 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh4g\" (UniqueName: \"kubernetes.io/projected/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61-kube-api-access-mnh4g\") pod \"openstack-operator-index-x989z\" (UID: 
\"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61\") " pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:16 crc kubenswrapper[4932]: I0321 09:14:16.266745 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:16 crc kubenswrapper[4932]: I0321 09:14:16.504154 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x989z"] Mar 21 09:14:16 crc kubenswrapper[4932]: W0321 09:14:16.532858 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b3d0afb_8a60_416e_ba8f_3f13fcf91c61.slice/crio-6cff2fbda069e5740b642c58a975d633c6174775cd31ce959f60a80a7b5015a7 WatchSource:0}: Error finding container 6cff2fbda069e5740b642c58a975d633c6174775cd31ce959f60a80a7b5015a7: Status 404 returned error can't find the container with id 6cff2fbda069e5740b642c58a975d633c6174775cd31ce959f60a80a7b5015a7 Mar 21 09:14:17 crc kubenswrapper[4932]: I0321 09:14:17.292513 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:17 crc kubenswrapper[4932]: I0321 09:14:17.329448 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:17 crc kubenswrapper[4932]: I0321 09:14:17.407984 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x989z" event={"ID":"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61","Type":"ContainerStarted","Data":"6cff2fbda069e5740b642c58a975d633c6174775cd31ce959f60a80a7b5015a7"} Mar 21 09:14:19 crc kubenswrapper[4932]: I0321 09:14:19.326686 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x989z"] Mar 21 09:14:19 crc kubenswrapper[4932]: I0321 09:14:19.809375 4932 scope.go:117] "RemoveContainer" 
containerID="e741ac433bf70b841f9c844875a94b5cea4c853e26f8aef7bd947bad3f799475" Mar 21 09:14:19 crc kubenswrapper[4932]: I0321 09:14:19.935846 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bspz8"] Mar 21 09:14:19 crc kubenswrapper[4932]: I0321 09:14:19.936824 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:19 crc kubenswrapper[4932]: I0321 09:14:19.945577 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bspz8"] Mar 21 09:14:20 crc kubenswrapper[4932]: I0321 09:14:20.040782 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-httcm\" (UniqueName: \"kubernetes.io/projected/99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e-kube-api-access-httcm\") pod \"openstack-operator-index-bspz8\" (UID: \"99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e\") " pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:20 crc kubenswrapper[4932]: I0321 09:14:20.142278 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-httcm\" (UniqueName: \"kubernetes.io/projected/99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e-kube-api-access-httcm\") pod \"openstack-operator-index-bspz8\" (UID: \"99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e\") " pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:20 crc kubenswrapper[4932]: I0321 09:14:20.160041 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-httcm\" (UniqueName: \"kubernetes.io/projected/99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e-kube-api-access-httcm\") pod \"openstack-operator-index-bspz8\" (UID: \"99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e\") " pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:20 crc kubenswrapper[4932]: I0321 09:14:20.266561 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.105711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bspz8"] Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.434817 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x989z" event={"ID":"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61","Type":"ContainerStarted","Data":"9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7"} Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.434876 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x989z" podUID="4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" containerName="registry-server" containerID="cri-o://9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7" gracePeriod=2 Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.436628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bspz8" event={"ID":"99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e","Type":"ContainerStarted","Data":"029c9a22827a5134321b262c65c6e345f935b7343dba8f911abee0ecf7edf755"} Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.436664 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bspz8" event={"ID":"99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e","Type":"ContainerStarted","Data":"cbf41466d38d9b960da259773d3bbd96fb2d679560d03e2061d8020c86cc3f65"} Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.451802 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x989z" podStartSLOduration=2.228150212 podStartE2EDuration="6.4517842s" podCreationTimestamp="2026-03-21 09:14:15 +0000 UTC" firstStartedPulling="2026-03-21 09:14:16.542853704 +0000 UTC 
m=+960.138051973" lastFinishedPulling="2026-03-21 09:14:20.766487692 +0000 UTC m=+964.361685961" observedRunningTime="2026-03-21 09:14:21.450449948 +0000 UTC m=+965.045648217" watchObservedRunningTime="2026-03-21 09:14:21.4517842 +0000 UTC m=+965.046982489" Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.466599 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bspz8" podStartSLOduration=2.410910653 podStartE2EDuration="2.466573583s" podCreationTimestamp="2026-03-21 09:14:19 +0000 UTC" firstStartedPulling="2026-03-21 09:14:21.112511986 +0000 UTC m=+964.707710255" lastFinishedPulling="2026-03-21 09:14:21.168174916 +0000 UTC m=+964.763373185" observedRunningTime="2026-03-21 09:14:21.462280368 +0000 UTC m=+965.057478637" watchObservedRunningTime="2026-03-21 09:14:21.466573583 +0000 UTC m=+965.061771852" Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.803451 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.813656 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-s29rd" Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.863965 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnh4g\" (UniqueName: \"kubernetes.io/projected/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61-kube-api-access-mnh4g\") pod \"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61\" (UID: \"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61\") " Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.869533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61-kube-api-access-mnh4g" (OuterVolumeSpecName: "kube-api-access-mnh4g") pod "4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" (UID: "4b3d0afb-8a60-416e-ba8f-3f13fcf91c61"). InnerVolumeSpecName "kube-api-access-mnh4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:14:21 crc kubenswrapper[4932]: I0321 09:14:21.965892 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnh4g\" (UniqueName: \"kubernetes.io/projected/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61-kube-api-access-mnh4g\") on node \"crc\" DevicePath \"\"" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.295276 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bkzvk" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.330117 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-2rjd6" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.443615 4932 generic.go:334] "Generic (PLEG): container finished" podID="4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" containerID="9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7" exitCode=0 Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.443748 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x989z" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.443867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x989z" event={"ID":"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61","Type":"ContainerDied","Data":"9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7"} Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.443903 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x989z" event={"ID":"4b3d0afb-8a60-416e-ba8f-3f13fcf91c61","Type":"ContainerDied","Data":"6cff2fbda069e5740b642c58a975d633c6174775cd31ce959f60a80a7b5015a7"} Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.443920 4932 scope.go:117] "RemoveContainer" containerID="9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.466697 4932 scope.go:117] "RemoveContainer" containerID="9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7" Mar 21 09:14:22 crc kubenswrapper[4932]: E0321 09:14:22.467128 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7\": container with ID starting with 9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7 not found: ID does not exist" containerID="9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.467156 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7"} err="failed to get container status \"9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7\": rpc error: code = NotFound desc = could not find container 
\"9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7\": container with ID starting with 9cbe45a7f085ced0db9bf903d973cc24454732205bf0cd3b74204b15cba7deb7 not found: ID does not exist" Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.475755 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x989z"] Mar 21 09:14:22 crc kubenswrapper[4932]: I0321 09:14:22.483631 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x989z"] Mar 21 09:14:23 crc kubenswrapper[4932]: I0321 09:14:23.714450 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" path="/var/lib/kubelet/pods/4b3d0afb-8a60-416e-ba8f-3f13fcf91c61/volumes" Mar 21 09:14:30 crc kubenswrapper[4932]: I0321 09:14:30.267176 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:30 crc kubenswrapper[4932]: I0321 09:14:30.268002 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:30 crc kubenswrapper[4932]: I0321 09:14:30.306528 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:30 crc kubenswrapper[4932]: I0321 09:14:30.578787 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-bspz8" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.674526 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v"] Mar 21 09:14:36 crc kubenswrapper[4932]: E0321 09:14:36.675322 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" containerName="registry-server" Mar 21 09:14:36 
crc kubenswrapper[4932]: I0321 09:14:36.675338 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" containerName="registry-server" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.675535 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3d0afb-8a60-416e-ba8f-3f13fcf91c61" containerName="registry-server" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.676646 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.678726 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-s6v7n" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.683048 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v"] Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.795318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-util\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.795763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpbt\" (UniqueName: \"kubernetes.io/projected/6bc25096-fa48-42e9-9984-d44fbe344949-kube-api-access-ncpbt\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc 
kubenswrapper[4932]: I0321 09:14:36.795810 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-bundle\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.897215 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-util\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.897305 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpbt\" (UniqueName: \"kubernetes.io/projected/6bc25096-fa48-42e9-9984-d44fbe344949-kube-api-access-ncpbt\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.897375 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-bundle\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.898015 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-util\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.898240 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-bundle\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:36 crc kubenswrapper[4932]: I0321 09:14:36.924143 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpbt\" (UniqueName: \"kubernetes.io/projected/6bc25096-fa48-42e9-9984-d44fbe344949-kube-api-access-ncpbt\") pod \"c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:37 crc kubenswrapper[4932]: I0321 09:14:37.015766 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:37 crc kubenswrapper[4932]: W0321 09:14:37.411395 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bc25096_fa48_42e9_9984_d44fbe344949.slice/crio-ddf7cca8f4582a77bf22b7560394892767f2bf3a247afbfbcff6b12183aef7eb WatchSource:0}: Error finding container ddf7cca8f4582a77bf22b7560394892767f2bf3a247afbfbcff6b12183aef7eb: Status 404 returned error can't find the container with id ddf7cca8f4582a77bf22b7560394892767f2bf3a247afbfbcff6b12183aef7eb Mar 21 09:14:37 crc kubenswrapper[4932]: I0321 09:14:37.412853 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v"] Mar 21 09:14:37 crc kubenswrapper[4932]: I0321 09:14:37.548111 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" event={"ID":"6bc25096-fa48-42e9-9984-d44fbe344949","Type":"ContainerStarted","Data":"ddf7cca8f4582a77bf22b7560394892767f2bf3a247afbfbcff6b12183aef7eb"} Mar 21 09:14:38 crc kubenswrapper[4932]: I0321 09:14:38.556828 4932 generic.go:334] "Generic (PLEG): container finished" podID="6bc25096-fa48-42e9-9984-d44fbe344949" containerID="feeb87590566661e74cfbeb445293f16fc5d76a1f9cc025c75151563a8d5d0f8" exitCode=0 Mar 21 09:14:38 crc kubenswrapper[4932]: I0321 09:14:38.556887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" event={"ID":"6bc25096-fa48-42e9-9984-d44fbe344949","Type":"ContainerDied","Data":"feeb87590566661e74cfbeb445293f16fc5d76a1f9cc025c75151563a8d5d0f8"} Mar 21 09:14:39 crc kubenswrapper[4932]: I0321 09:14:39.565418 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="6bc25096-fa48-42e9-9984-d44fbe344949" containerID="b71faac91a7750ea381da93f1aa482a625dea47995ab58cd21411b1d8b0dff01" exitCode=0 Mar 21 09:14:39 crc kubenswrapper[4932]: I0321 09:14:39.565511 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" event={"ID":"6bc25096-fa48-42e9-9984-d44fbe344949","Type":"ContainerDied","Data":"b71faac91a7750ea381da93f1aa482a625dea47995ab58cd21411b1d8b0dff01"} Mar 21 09:14:40 crc kubenswrapper[4932]: I0321 09:14:40.576446 4932 generic.go:334] "Generic (PLEG): container finished" podID="6bc25096-fa48-42e9-9984-d44fbe344949" containerID="dc5fa3221b257064aef9449989d4a672e9b2b02a6fa6e70ff5a3c060d157cc66" exitCode=0 Mar 21 09:14:40 crc kubenswrapper[4932]: I0321 09:14:40.576616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" event={"ID":"6bc25096-fa48-42e9-9984-d44fbe344949","Type":"ContainerDied","Data":"dc5fa3221b257064aef9449989d4a672e9b2b02a6fa6e70ff5a3c060d157cc66"} Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.893612 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.968305 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncpbt\" (UniqueName: \"kubernetes.io/projected/6bc25096-fa48-42e9-9984-d44fbe344949-kube-api-access-ncpbt\") pod \"6bc25096-fa48-42e9-9984-d44fbe344949\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.968395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-bundle\") pod \"6bc25096-fa48-42e9-9984-d44fbe344949\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.968437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-util\") pod \"6bc25096-fa48-42e9-9984-d44fbe344949\" (UID: \"6bc25096-fa48-42e9-9984-d44fbe344949\") " Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.968972 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-bundle" (OuterVolumeSpecName: "bundle") pod "6bc25096-fa48-42e9-9984-d44fbe344949" (UID: "6bc25096-fa48-42e9-9984-d44fbe344949"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.973559 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc25096-fa48-42e9-9984-d44fbe344949-kube-api-access-ncpbt" (OuterVolumeSpecName: "kube-api-access-ncpbt") pod "6bc25096-fa48-42e9-9984-d44fbe344949" (UID: "6bc25096-fa48-42e9-9984-d44fbe344949"). InnerVolumeSpecName "kube-api-access-ncpbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:14:41 crc kubenswrapper[4932]: I0321 09:14:41.983630 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-util" (OuterVolumeSpecName: "util") pod "6bc25096-fa48-42e9-9984-d44fbe344949" (UID: "6bc25096-fa48-42e9-9984-d44fbe344949"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:14:42 crc kubenswrapper[4932]: I0321 09:14:42.070340 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-util\") on node \"crc\" DevicePath \"\"" Mar 21 09:14:42 crc kubenswrapper[4932]: I0321 09:14:42.070413 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncpbt\" (UniqueName: \"kubernetes.io/projected/6bc25096-fa48-42e9-9984-d44fbe344949-kube-api-access-ncpbt\") on node \"crc\" DevicePath \"\"" Mar 21 09:14:42 crc kubenswrapper[4932]: I0321 09:14:42.070427 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bc25096-fa48-42e9-9984-d44fbe344949-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:14:42 crc kubenswrapper[4932]: I0321 09:14:42.595985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" event={"ID":"6bc25096-fa48-42e9-9984-d44fbe344949","Type":"ContainerDied","Data":"ddf7cca8f4582a77bf22b7560394892767f2bf3a247afbfbcff6b12183aef7eb"} Mar 21 09:14:42 crc kubenswrapper[4932]: I0321 09:14:42.596029 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddf7cca8f4582a77bf22b7560394892767f2bf3a247afbfbcff6b12183aef7eb" Mar 21 09:14:42 crc kubenswrapper[4932]: I0321 09:14:42.596126 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.947608 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj"] Mar 21 09:14:48 crc kubenswrapper[4932]: E0321 09:14:48.948465 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="pull" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.948480 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="pull" Mar 21 09:14:48 crc kubenswrapper[4932]: E0321 09:14:48.948495 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="extract" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.948503 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="extract" Mar 21 09:14:48 crc kubenswrapper[4932]: E0321 09:14:48.948517 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="util" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.948524 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="util" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.948656 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc25096-fa48-42e9-9984-d44fbe344949" containerName="extract" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.949185 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:14:48 crc kubenswrapper[4932]: I0321 09:14:48.955258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jjw89" Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.025931 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj"] Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.076002 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmdq\" (UniqueName: \"kubernetes.io/projected/5e38ddbb-6b03-4b21-8d53-21852227d6bf-kube-api-access-bzmdq\") pod \"openstack-operator-controller-init-5787cd5f5b-j28mj\" (UID: \"5e38ddbb-6b03-4b21-8d53-21852227d6bf\") " pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.177032 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzmdq\" (UniqueName: \"kubernetes.io/projected/5e38ddbb-6b03-4b21-8d53-21852227d6bf-kube-api-access-bzmdq\") pod \"openstack-operator-controller-init-5787cd5f5b-j28mj\" (UID: \"5e38ddbb-6b03-4b21-8d53-21852227d6bf\") " pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.195675 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzmdq\" (UniqueName: \"kubernetes.io/projected/5e38ddbb-6b03-4b21-8d53-21852227d6bf-kube-api-access-bzmdq\") pod \"openstack-operator-controller-init-5787cd5f5b-j28mj\" (UID: \"5e38ddbb-6b03-4b21-8d53-21852227d6bf\") " pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.270520 4932 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.528105 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj"] Mar 21 09:14:49 crc kubenswrapper[4932]: I0321 09:14:49.643947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" event={"ID":"5e38ddbb-6b03-4b21-8d53-21852227d6bf","Type":"ContainerStarted","Data":"1144451e2d96274129647b4021cad151e42d47626a97d86d1ccfa4782b8c5f8d"} Mar 21 09:14:54 crc kubenswrapper[4932]: I0321 09:14:54.677799 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" event={"ID":"5e38ddbb-6b03-4b21-8d53-21852227d6bf","Type":"ContainerStarted","Data":"6ef417dfcac763a4ca8d0940ab44acdc1802818a51b692b73ea23794c4984d56"} Mar 21 09:14:54 crc kubenswrapper[4932]: I0321 09:14:54.678114 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:14:54 crc kubenswrapper[4932]: I0321 09:14:54.705819 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" podStartSLOduration=2.33185606 podStartE2EDuration="6.705801992s" podCreationTimestamp="2026-03-21 09:14:48 +0000 UTC" firstStartedPulling="2026-03-21 09:14:49.547605279 +0000 UTC m=+993.142803538" lastFinishedPulling="2026-03-21 09:14:53.921551201 +0000 UTC m=+997.516749470" observedRunningTime="2026-03-21 09:14:54.70449533 +0000 UTC m=+998.299693609" watchObservedRunningTime="2026-03-21 09:14:54.705801992 +0000 UTC m=+998.301000261" Mar 21 09:14:59 crc kubenswrapper[4932]: I0321 09:14:59.272795 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-5787cd5f5b-j28mj" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.145258 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh"] Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.146596 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.149246 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.151993 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.162928 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh"] Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.225554 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.225615 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.233890 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d7kqj\" (UniqueName: \"kubernetes.io/projected/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-kube-api-access-d7kqj\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.234006 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-config-volume\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.234044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-secret-volume\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.335244 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-config-volume\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.335303 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-secret-volume\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.335369 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7kqj\" (UniqueName: \"kubernetes.io/projected/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-kube-api-access-d7kqj\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.336339 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-config-volume\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.345027 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-secret-volume\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.355132 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7kqj\" (UniqueName: \"kubernetes.io/projected/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-kube-api-access-d7kqj\") pod \"collect-profiles-29568075-2kfkh\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.467613 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:00 crc kubenswrapper[4932]: I0321 09:15:00.882041 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh"] Mar 21 09:15:00 crc kubenswrapper[4932]: W0321 09:15:00.905689 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae52ce4a_9d6e_4032_ad79_67343a8cd2db.slice/crio-8b38973239cc462cde68029875ceda1b6b97b78aadc30546a1cdf68cf3733c14 WatchSource:0}: Error finding container 8b38973239cc462cde68029875ceda1b6b97b78aadc30546a1cdf68cf3733c14: Status 404 returned error can't find the container with id 8b38973239cc462cde68029875ceda1b6b97b78aadc30546a1cdf68cf3733c14 Mar 21 09:15:01 crc kubenswrapper[4932]: I0321 09:15:01.726481 4932 generic.go:334] "Generic (PLEG): container finished" podID="ae52ce4a-9d6e-4032-ad79-67343a8cd2db" containerID="8cd598d2692fb3fe5cde4d23e428e67a79bb9376ad4bd8b567c339be8230878b" exitCode=0 Mar 21 09:15:01 crc kubenswrapper[4932]: I0321 09:15:01.726531 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" event={"ID":"ae52ce4a-9d6e-4032-ad79-67343a8cd2db","Type":"ContainerDied","Data":"8cd598d2692fb3fe5cde4d23e428e67a79bb9376ad4bd8b567c339be8230878b"} Mar 21 09:15:01 crc kubenswrapper[4932]: I0321 09:15:01.726832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" event={"ID":"ae52ce4a-9d6e-4032-ad79-67343a8cd2db","Type":"ContainerStarted","Data":"8b38973239cc462cde68029875ceda1b6b97b78aadc30546a1cdf68cf3733c14"} Mar 21 09:15:02 crc kubenswrapper[4932]: I0321 09:15:02.968646 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:02 crc kubenswrapper[4932]: I0321 09:15:02.972137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-secret-volume\") pod \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " Mar 21 09:15:02 crc kubenswrapper[4932]: I0321 09:15:02.972261 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-config-volume\") pod \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " Mar 21 09:15:02 crc kubenswrapper[4932]: I0321 09:15:02.973532 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae52ce4a-9d6e-4032-ad79-67343a8cd2db" (UID: "ae52ce4a-9d6e-4032-ad79-67343a8cd2db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:15:02 crc kubenswrapper[4932]: I0321 09:15:02.981005 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae52ce4a-9d6e-4032-ad79-67343a8cd2db" (UID: "ae52ce4a-9d6e-4032-ad79-67343a8cd2db"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.073527 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7kqj\" (UniqueName: \"kubernetes.io/projected/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-kube-api-access-d7kqj\") pod \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\" (UID: \"ae52ce4a-9d6e-4032-ad79-67343a8cd2db\") " Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.073810 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.073836 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.076563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-kube-api-access-d7kqj" (OuterVolumeSpecName: "kube-api-access-d7kqj") pod "ae52ce4a-9d6e-4032-ad79-67343a8cd2db" (UID: "ae52ce4a-9d6e-4032-ad79-67343a8cd2db"). InnerVolumeSpecName "kube-api-access-d7kqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.174698 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7kqj\" (UniqueName: \"kubernetes.io/projected/ae52ce4a-9d6e-4032-ad79-67343a8cd2db-kube-api-access-d7kqj\") on node \"crc\" DevicePath \"\"" Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.740340 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" event={"ID":"ae52ce4a-9d6e-4032-ad79-67343a8cd2db","Type":"ContainerDied","Data":"8b38973239cc462cde68029875ceda1b6b97b78aadc30546a1cdf68cf3733c14"} Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.740394 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b38973239cc462cde68029875ceda1b6b97b78aadc30546a1cdf68cf3733c14" Mar 21 09:15:03 crc kubenswrapper[4932]: I0321 09:15:03.740473 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.823748 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9twtl"] Mar 21 09:15:29 crc kubenswrapper[4932]: E0321 09:15:29.824658 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae52ce4a-9d6e-4032-ad79-67343a8cd2db" containerName="collect-profiles" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.824676 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae52ce4a-9d6e-4032-ad79-67343a8cd2db" containerName="collect-profiles" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.824857 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae52ce4a-9d6e-4032-ad79-67343a8cd2db" containerName="collect-profiles" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.825950 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.832962 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9twtl"] Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.933681 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shc9z\" (UniqueName: \"kubernetes.io/projected/8eff25f7-98c9-4364-aceb-c5ce1578e66e-kube-api-access-shc9z\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.933758 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-utilities\") pod \"certified-operators-9twtl\" (UID: 
\"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:29 crc kubenswrapper[4932]: I0321 09:15:29.933812 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-catalog-content\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.035034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-catalog-content\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.035096 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shc9z\" (UniqueName: \"kubernetes.io/projected/8eff25f7-98c9-4364-aceb-c5ce1578e66e-kube-api-access-shc9z\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.035150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-utilities\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.035638 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-catalog-content\") pod \"certified-operators-9twtl\" (UID: 
\"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.036329 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-utilities\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.054514 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shc9z\" (UniqueName: \"kubernetes.io/projected/8eff25f7-98c9-4364-aceb-c5ce1578e66e-kube-api-access-shc9z\") pod \"certified-operators-9twtl\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.141893 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.225930 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.225991 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.437254 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9twtl"] Mar 21 
09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.945340 4932 generic.go:334] "Generic (PLEG): container finished" podID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerID="2cecce4b5f5761a377c6f274580afd522cc3057cd2adfadb56e18eb03abf0e08" exitCode=0 Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.945659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerDied","Data":"2cecce4b5f5761a377c6f274580afd522cc3057cd2adfadb56e18eb03abf0e08"} Mar 21 09:15:30 crc kubenswrapper[4932]: I0321 09:15:30.945685 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerStarted","Data":"126b45961543db7a55eb2adef8694d987ed15a9aa2472f0e84a3a3a89cf99ba3"} Mar 21 09:15:31 crc kubenswrapper[4932]: I0321 09:15:31.953413 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerStarted","Data":"815c89554d6b04dc72b2decd86932aee03e5d5628f5b914c4fb15b873fd75936"} Mar 21 09:15:32 crc kubenswrapper[4932]: I0321 09:15:32.965597 4932 generic.go:334] "Generic (PLEG): container finished" podID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerID="815c89554d6b04dc72b2decd86932aee03e5d5628f5b914c4fb15b873fd75936" exitCode=0 Mar 21 09:15:32 crc kubenswrapper[4932]: I0321 09:15:32.965638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerDied","Data":"815c89554d6b04dc72b2decd86932aee03e5d5628f5b914c4fb15b873fd75936"} Mar 21 09:15:33 crc kubenswrapper[4932]: I0321 09:15:33.973981 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" 
event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerStarted","Data":"3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85"} Mar 21 09:15:33 crc kubenswrapper[4932]: I0321 09:15:33.993716 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9twtl" podStartSLOduration=2.561399812 podStartE2EDuration="4.993692955s" podCreationTimestamp="2026-03-21 09:15:29 +0000 UTC" firstStartedPulling="2026-03-21 09:15:30.947042875 +0000 UTC m=+1034.542241144" lastFinishedPulling="2026-03-21 09:15:33.379336018 +0000 UTC m=+1036.974534287" observedRunningTime="2026-03-21 09:15:33.991189903 +0000 UTC m=+1037.586388182" watchObservedRunningTime="2026-03-21 09:15:33.993692955 +0000 UTC m=+1037.588891224" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.544636 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.546025 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.549543 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ph9kf" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.559766 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.560683 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.562938 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xp5cj" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.566351 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.569211 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwfz\" (UniqueName: \"kubernetes.io/projected/0f39b226-69b5-4dcf-b4eb-f92a2fc10261-kube-api-access-tpwfz\") pod \"barbican-operator-controller-manager-59bc569d95-4jn2b\" (UID: \"0f39b226-69b5-4dcf-b4eb-f92a2fc10261\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.569278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn2w\" (UniqueName: \"kubernetes.io/projected/bc611f60-9bae-4e2a-a6f9-7f88221b7464-kube-api-access-6vn2w\") pod \"cinder-operator-controller-manager-8d58dc466-b6g8p\" (UID: \"bc611f60-9bae-4e2a-a6f9-7f88221b7464\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.575177 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.585151 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.586204 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.601377 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9kqrn" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.627531 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.660426 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.661995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.667493 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-s2qkz" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.671284 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn2w\" (UniqueName: \"kubernetes.io/projected/bc611f60-9bae-4e2a-a6f9-7f88221b7464-kube-api-access-6vn2w\") pod \"cinder-operator-controller-manager-8d58dc466-b6g8p\" (UID: \"bc611f60-9bae-4e2a-a6f9-7f88221b7464\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.671328 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92zs\" (UniqueName: \"kubernetes.io/projected/d6b79abd-6b81-44e4-89dd-3743dd7cc389-kube-api-access-t92zs\") pod \"designate-operator-controller-manager-588d4d986b-n42t7\" (UID: \"d6b79abd-6b81-44e4-89dd-3743dd7cc389\") " 
pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.671402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2fgw\" (UniqueName: \"kubernetes.io/projected/ac0ffb78-05f2-4c27-a5c3-6020714b1792-kube-api-access-p2fgw\") pod \"glance-operator-controller-manager-79df6bcc97-kbdqg\" (UID: \"ac0ffb78-05f2-4c27-a5c3-6020714b1792\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.671431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpwfz\" (UniqueName: \"kubernetes.io/projected/0f39b226-69b5-4dcf-b4eb-f92a2fc10261-kube-api-access-tpwfz\") pod \"barbican-operator-controller-manager-59bc569d95-4jn2b\" (UID: \"0f39b226-69b5-4dcf-b4eb-f92a2fc10261\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.697809 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.698744 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.710034 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qzpd2" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.715764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpwfz\" (UniqueName: \"kubernetes.io/projected/0f39b226-69b5-4dcf-b4eb-f92a2fc10261-kube-api-access-tpwfz\") pod \"barbican-operator-controller-manager-59bc569d95-4jn2b\" (UID: \"0f39b226-69b5-4dcf-b4eb-f92a2fc10261\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.720125 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vn2w\" (UniqueName: \"kubernetes.io/projected/bc611f60-9bae-4e2a-a6f9-7f88221b7464-kube-api-access-6vn2w\") pod \"cinder-operator-controller-manager-8d58dc466-b6g8p\" (UID: \"bc611f60-9bae-4e2a-a6f9-7f88221b7464\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.732905 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.762099 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.763103 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.770691 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qhrht" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.772259 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2fgw\" (UniqueName: \"kubernetes.io/projected/ac0ffb78-05f2-4c27-a5c3-6020714b1792-kube-api-access-p2fgw\") pod \"glance-operator-controller-manager-79df6bcc97-kbdqg\" (UID: \"ac0ffb78-05f2-4c27-a5c3-6020714b1792\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.772453 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92zs\" (UniqueName: \"kubernetes.io/projected/d6b79abd-6b81-44e4-89dd-3743dd7cc389-kube-api-access-t92zs\") pod \"designate-operator-controller-manager-588d4d986b-n42t7\" (UID: \"d6b79abd-6b81-44e4-89dd-3743dd7cc389\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.779583 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.786449 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.789468 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.791949 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.795545 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.795565 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nxzgq" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.807047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92zs\" (UniqueName: \"kubernetes.io/projected/d6b79abd-6b81-44e4-89dd-3743dd7cc389-kube-api-access-t92zs\") pod \"designate-operator-controller-manager-588d4d986b-n42t7\" (UID: \"d6b79abd-6b81-44e4-89dd-3743dd7cc389\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.811141 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.812117 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.818546 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2fgw\" (UniqueName: \"kubernetes.io/projected/ac0ffb78-05f2-4c27-a5c3-6020714b1792-kube-api-access-p2fgw\") pod \"glance-operator-controller-manager-79df6bcc97-kbdqg\" (UID: \"ac0ffb78-05f2-4c27-a5c3-6020714b1792\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.819222 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hxb4r" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.836434 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.857705 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.865799 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.875298 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.876464 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.876487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2d77\" (UniqueName: \"kubernetes.io/projected/548b3963-aca5-475e-b79d-7d9870d11155-kube-api-access-q2d77\") pod \"heat-operator-controller-manager-67dd5f86f5-mktkt\" (UID: \"548b3963-aca5-475e-b79d-7d9870d11155\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.881955 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-56zr2" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.884209 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.885013 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwc4d\" (UniqueName: \"kubernetes.io/projected/015c7bce-9d22-47ec-90ac-049bbba07d7e-kube-api-access-zwc4d\") pod \"horizon-operator-controller-manager-8464cc45fb-r8qnx\" (UID: \"015c7bce-9d22-47ec-90ac-049bbba07d7e\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.894224 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.895605 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.898373 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gcsnr" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.906766 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.922028 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.938003 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-xshm8"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.939150 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.942972 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pgm55" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.949185 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.961106 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.961983 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.964504 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-xshm8"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.964963 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nbznf" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.971956 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.985897 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv5zj\" (UniqueName: \"kubernetes.io/projected/a3b11074-e78d-4f10-890f-d0c9dc1b4d46-kube-api-access-jv5zj\") pod \"keystone-operator-controller-manager-768b96df4c-m25c9\" (UID: \"a3b11074-e78d-4f10-890f-d0c9dc1b4d46\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986257 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x8f4\" (UniqueName: \"kubernetes.io/projected/ad2f0a32-8261-4292-a063-13b65ab4ffe8-kube-api-access-4x8f4\") pod \"mariadb-operator-controller-manager-67ccfc9778-b4jmj\" (UID: 
\"ad2f0a32-8261-4292-a063-13b65ab4ffe8\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986390 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2d77\" (UniqueName: \"kubernetes.io/projected/548b3963-aca5-475e-b79d-7d9870d11155-kube-api-access-q2d77\") pod \"heat-operator-controller-manager-67dd5f86f5-mktkt\" (UID: \"548b3963-aca5-475e-b79d-7d9870d11155\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw96\" (UniqueName: \"kubernetes.io/projected/e041327c-c039-44fc-8fa6-c7e606e1bc56-kube-api-access-4zw96\") pod \"neutron-operator-controller-manager-767865f676-xshm8\" (UID: \"e041327c-c039-44fc-8fa6-c7e606e1bc56\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986583 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44p2\" (UniqueName: \"kubernetes.io/projected/49d38fa1-5a18-49d6-92e0-47942d410eba-kube-api-access-w44p2\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jch7\" (UniqueName: \"kubernetes.io/projected/8356f354-c7d8-4c5c-9db0-c5e458971c5d-kube-api-access-9jch7\") pod \"ironic-operator-controller-manager-6f787dddc9-dt528\" (UID: \"8356f354-c7d8-4c5c-9db0-c5e458971c5d\") " 
pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986732 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7gk\" (UniqueName: \"kubernetes.io/projected/1e3fd98f-c5ac-4087-9a3d-a3aec1241774-kube-api-access-jd7gk\") pod \"manila-operator-controller-manager-55f864c847-gjtn7\" (UID: \"1e3fd98f-c5ac-4087-9a3d-a3aec1241774\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.986812 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwc4d\" (UniqueName: \"kubernetes.io/projected/015c7bce-9d22-47ec-90ac-049bbba07d7e-kube-api-access-zwc4d\") pod \"horizon-operator-controller-manager-8464cc45fb-r8qnx\" (UID: \"015c7bce-9d22-47ec-90ac-049bbba07d7e\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.989170 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.990497 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d"] Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.991736 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:15:37 crc kubenswrapper[4932]: I0321 09:15:37.994696 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kxx89" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.008499 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2d77\" (UniqueName: \"kubernetes.io/projected/548b3963-aca5-475e-b79d-7d9870d11155-kube-api-access-q2d77\") pod \"heat-operator-controller-manager-67dd5f86f5-mktkt\" (UID: \"548b3963-aca5-475e-b79d-7d9870d11155\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.009448 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwc4d\" (UniqueName: \"kubernetes.io/projected/015c7bce-9d22-47ec-90ac-049bbba07d7e-kube-api-access-zwc4d\") pod \"horizon-operator-controller-manager-8464cc45fb-r8qnx\" (UID: \"015c7bce-9d22-47ec-90ac-049bbba07d7e\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.012848 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.013713 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.019869 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.025806 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-pt7tc" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.027076 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.050844 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.051765 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.056730 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.056924 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nxvzm" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.058173 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.058649 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.059760 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.077718 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-25n8x" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.087913 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccbd\" (UniqueName: \"kubernetes.io/projected/1e153c9d-de71-4eea-9b33-4713472b3431-kube-api-access-4ccbd\") pod \"nova-operator-controller-manager-5d488d59fb-nlr7d\" (UID: \"1e153c9d-de71-4eea-9b33-4713472b3431\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.087957 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw96\" (UniqueName: \"kubernetes.io/projected/e041327c-c039-44fc-8fa6-c7e606e1bc56-kube-api-access-4zw96\") pod \"neutron-operator-controller-manager-767865f676-xshm8\" (UID: \"e041327c-c039-44fc-8fa6-c7e606e1bc56\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.087981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7gx\" (UniqueName: \"kubernetes.io/projected/9dc780d5-ff2a-4d92-ba79-076f72964907-kube-api-access-zl7gx\") pod \"ovn-operator-controller-manager-884679f54-5dgnz\" (UID: 
\"9dc780d5-ff2a-4d92-ba79-076f72964907\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088003 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44p2\" (UniqueName: \"kubernetes.io/projected/49d38fa1-5a18-49d6-92e0-47942d410eba-kube-api-access-w44p2\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088021 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfv9x\" (UniqueName: \"kubernetes.io/projected/23de4e40-dece-4cf2-a0f2-60fdcd2c7588-kube-api-access-nfv9x\") pod \"octavia-operator-controller-manager-5b9f45d989-6hl6p\" (UID: \"23de4e40-dece-4cf2-a0f2-60fdcd2c7588\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088039 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jch7\" (UniqueName: \"kubernetes.io/projected/8356f354-c7d8-4c5c-9db0-c5e458971c5d-kube-api-access-9jch7\") pod \"ironic-operator-controller-manager-6f787dddc9-dt528\" (UID: \"8356f354-c7d8-4c5c-9db0-c5e458971c5d\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088063 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7gk\" (UniqueName: \"kubernetes.io/projected/1e3fd98f-c5ac-4087-9a3d-a3aec1241774-kube-api-access-jd7gk\") pod \"manila-operator-controller-manager-55f864c847-gjtn7\" (UID: \"1e3fd98f-c5ac-4087-9a3d-a3aec1241774\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:15:38 crc 
kubenswrapper[4932]: I0321 09:15:38.088097 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kf85\" (UniqueName: \"kubernetes.io/projected/66456873-3ce6-4ccc-bc44-ef45d9c30821-kube-api-access-9kf85\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088121 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv5zj\" (UniqueName: \"kubernetes.io/projected/a3b11074-e78d-4f10-890f-d0c9dc1b4d46-kube-api-access-jv5zj\") pod \"keystone-operator-controller-manager-768b96df4c-m25c9\" (UID: \"a3b11074-e78d-4f10-890f-d0c9dc1b4d46\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088149 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088178 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x8f4\" (UniqueName: \"kubernetes.io/projected/ad2f0a32-8261-4292-a063-13b65ab4ffe8-kube-api-access-4x8f4\") pod \"mariadb-operator-controller-manager-67ccfc9778-b4jmj\" (UID: \"ad2f0a32-8261-4292-a063-13b65ab4ffe8\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.088200 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.091589 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.091637 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert podName:49d38fa1-5a18-49d6-92e0-47942d410eba nodeName:}" failed. No retries permitted until 2026-03-21 09:15:38.591620538 +0000 UTC m=+1042.186818807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-zf69x" (UID: "49d38fa1-5a18-49d6-92e0-47942d410eba") : secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.097692 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.110501 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.142416 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv5zj\" (UniqueName: \"kubernetes.io/projected/a3b11074-e78d-4f10-890f-d0c9dc1b4d46-kube-api-access-jv5zj\") pod \"keystone-operator-controller-manager-768b96df4c-m25c9\" (UID: \"a3b11074-e78d-4f10-890f-d0c9dc1b4d46\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.146322 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44p2\" (UniqueName: \"kubernetes.io/projected/49d38fa1-5a18-49d6-92e0-47942d410eba-kube-api-access-w44p2\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.151494 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.155034 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x8f4\" (UniqueName: \"kubernetes.io/projected/ad2f0a32-8261-4292-a063-13b65ab4ffe8-kube-api-access-4x8f4\") pod \"mariadb-operator-controller-manager-67ccfc9778-b4jmj\" (UID: \"ad2f0a32-8261-4292-a063-13b65ab4ffe8\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.165518 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.169631 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-25fsr" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.171171 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw96\" (UniqueName: \"kubernetes.io/projected/e041327c-c039-44fc-8fa6-c7e606e1bc56-kube-api-access-4zw96\") pod \"neutron-operator-controller-manager-767865f676-xshm8\" (UID: \"e041327c-c039-44fc-8fa6-c7e606e1bc56\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.191398 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kf85\" (UniqueName: \"kubernetes.io/projected/66456873-3ce6-4ccc-bc44-ef45d9c30821-kube-api-access-9kf85\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.191486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.191521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ccbd\" (UniqueName: \"kubernetes.io/projected/1e153c9d-de71-4eea-9b33-4713472b3431-kube-api-access-4ccbd\") pod 
\"nova-operator-controller-manager-5d488d59fb-nlr7d\" (UID: \"1e153c9d-de71-4eea-9b33-4713472b3431\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.191550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7gx\" (UniqueName: \"kubernetes.io/projected/9dc780d5-ff2a-4d92-ba79-076f72964907-kube-api-access-zl7gx\") pod \"ovn-operator-controller-manager-884679f54-5dgnz\" (UID: \"9dc780d5-ff2a-4d92-ba79-076f72964907\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.193171 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.193265 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert podName:66456873-3ce6-4ccc-bc44-ef45d9c30821 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:38.693233371 +0000 UTC m=+1042.288431640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" (UID: "66456873-3ce6-4ccc-bc44-ef45d9c30821") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.195415 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfv9x\" (UniqueName: \"kubernetes.io/projected/23de4e40-dece-4cf2-a0f2-60fdcd2c7588-kube-api-access-nfv9x\") pod \"octavia-operator-controller-manager-5b9f45d989-6hl6p\" (UID: \"23de4e40-dece-4cf2-a0f2-60fdcd2c7588\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.200657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jch7\" (UniqueName: \"kubernetes.io/projected/8356f354-c7d8-4c5c-9db0-c5e458971c5d-kube-api-access-9jch7\") pod \"ironic-operator-controller-manager-6f787dddc9-dt528\" (UID: \"8356f354-c7d8-4c5c-9db0-c5e458971c5d\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.200849 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7gk\" (UniqueName: \"kubernetes.io/projected/1e3fd98f-c5ac-4087-9a3d-a3aec1241774-kube-api-access-jd7gk\") pod \"manila-operator-controller-manager-55f864c847-gjtn7\" (UID: \"1e3fd98f-c5ac-4087-9a3d-a3aec1241774\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.225551 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.234050 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-c674c5965-w84fw"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.234994 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.235773 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.238229 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w5j79" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.239282 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kf85\" (UniqueName: \"kubernetes.io/projected/66456873-3ce6-4ccc-bc44-ef45d9c30821-kube-api-access-9kf85\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.239950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ccbd\" (UniqueName: \"kubernetes.io/projected/1e153c9d-de71-4eea-9b33-4713472b3431-kube-api-access-4ccbd\") pod \"nova-operator-controller-manager-5d488d59fb-nlr7d\" (UID: \"1e153c9d-de71-4eea-9b33-4713472b3431\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.242641 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7gx\" (UniqueName: \"kubernetes.io/projected/9dc780d5-ff2a-4d92-ba79-076f72964907-kube-api-access-zl7gx\") pod \"ovn-operator-controller-manager-884679f54-5dgnz\" (UID: 
\"9dc780d5-ff2a-4d92-ba79-076f72964907\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.243699 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-w84fw"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.248352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfv9x\" (UniqueName: \"kubernetes.io/projected/23de4e40-dece-4cf2-a0f2-60fdcd2c7588-kube-api-access-nfv9x\") pod \"octavia-operator-controller-manager-5b9f45d989-6hl6p\" (UID: \"23de4e40-dece-4cf2-a0f2-60fdcd2c7588\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.261435 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.266494 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.277875 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.290368 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.301025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kx9\" (UniqueName: \"kubernetes.io/projected/137f9007-91ad-4db2-bd97-4fdb99ba1ecb-kube-api-access-q2kx9\") pod \"swift-operator-controller-manager-c674c5965-w84fw\" (UID: \"137f9007-91ad-4db2-bd97-4fdb99ba1ecb\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.301138 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5js2\" (UniqueName: \"kubernetes.io/projected/15c187a9-b91f-4418-a63e-20fd4f52de2f-kube-api-access-n5js2\") pod \"placement-operator-controller-manager-5784578c99-gh8vr\" (UID: \"15c187a9-b91f-4418-a63e-20fd4f52de2f\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.314483 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.314883 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.315696 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.320952 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-j5wt2" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.327566 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.350511 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.403075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5js2\" (UniqueName: \"kubernetes.io/projected/15c187a9-b91f-4418-a63e-20fd4f52de2f-kube-api-access-n5js2\") pod \"placement-operator-controller-manager-5784578c99-gh8vr\" (UID: \"15c187a9-b91f-4418-a63e-20fd4f52de2f\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.403161 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2kx9\" (UniqueName: \"kubernetes.io/projected/137f9007-91ad-4db2-bd97-4fdb99ba1ecb-kube-api-access-q2kx9\") pod \"swift-operator-controller-manager-c674c5965-w84fw\" (UID: \"137f9007-91ad-4db2-bd97-4fdb99ba1ecb\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.439532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5js2\" (UniqueName: \"kubernetes.io/projected/15c187a9-b91f-4418-a63e-20fd4f52de2f-kube-api-access-n5js2\") pod \"placement-operator-controller-manager-5784578c99-gh8vr\" (UID: 
\"15c187a9-b91f-4418-a63e-20fd4f52de2f\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.440435 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.441659 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.454461 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2kx9\" (UniqueName: \"kubernetes.io/projected/137f9007-91ad-4db2-bd97-4fdb99ba1ecb-kube-api-access-q2kx9\") pod \"swift-operator-controller-manager-c674c5965-w84fw\" (UID: \"137f9007-91ad-4db2-bd97-4fdb99ba1ecb\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.456701 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6rvq4" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.469682 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.470233 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.507112 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpsnd\" (UniqueName: \"kubernetes.io/projected/83f39e87-13b5-4282-8d4f-820f1f80931b-kube-api-access-gpsnd\") pod \"test-operator-controller-manager-5c5cb9c4d7-dbgzf\" (UID: \"83f39e87-13b5-4282-8d4f-820f1f80931b\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.507194 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbdb\" (UniqueName: \"kubernetes.io/projected/e5984b5c-66b6-4141-ae47-b0b92f487355-kube-api-access-fdbdb\") pod \"telemetry-operator-controller-manager-d6b694c5-2qld2\" (UID: \"e5984b5c-66b6-4141-ae47-b0b92f487355\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.508960 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.555895 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.556784 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.559729 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-p28xx" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.571365 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.580811 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.593203 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.608188 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpsnd\" (UniqueName: \"kubernetes.io/projected/83f39e87-13b5-4282-8d4f-820f1f80931b-kube-api-access-gpsnd\") pod \"test-operator-controller-manager-5c5cb9c4d7-dbgzf\" (UID: \"83f39e87-13b5-4282-8d4f-820f1f80931b\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.608244 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbdb\" (UniqueName: \"kubernetes.io/projected/e5984b5c-66b6-4141-ae47-b0b92f487355-kube-api-access-fdbdb\") pod \"telemetry-operator-controller-manager-d6b694c5-2qld2\" (UID: \"e5984b5c-66b6-4141-ae47-b0b92f487355\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.608268 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kct6w\" (UniqueName: \"kubernetes.io/projected/49da73f0-68f0-4d58-954a-f0c7132f3e9f-kube-api-access-kct6w\") pod \"watcher-operator-controller-manager-7d7cfb649d-wl9g6\" (UID: \"49da73f0-68f0-4d58-954a-f0c7132f3e9f\") " pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.608304 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.614592 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.614667 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert podName:49d38fa1-5a18-49d6-92e0-47942d410eba nodeName:}" failed. No retries permitted until 2026-03-21 09:15:39.614650684 +0000 UTC m=+1043.209848953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-zf69x" (UID: "49d38fa1-5a18-49d6-92e0-47942d410eba") : secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.636747 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.637839 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.644052 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.644236 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qr2n4" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.644237 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbdb\" (UniqueName: \"kubernetes.io/projected/e5984b5c-66b6-4141-ae47-b0b92f487355-kube-api-access-fdbdb\") pod \"telemetry-operator-controller-manager-d6b694c5-2qld2\" (UID: \"e5984b5c-66b6-4141-ae47-b0b92f487355\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.644400 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.647527 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpsnd\" (UniqueName: \"kubernetes.io/projected/83f39e87-13b5-4282-8d4f-820f1f80931b-kube-api-access-gpsnd\") pod \"test-operator-controller-manager-5c5cb9c4d7-dbgzf\" (UID: \"83f39e87-13b5-4282-8d4f-820f1f80931b\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.664480 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.671964 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9"] Mar 21 09:15:38 crc kubenswrapper[4932]: 
I0321 09:15:38.672911 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.677843 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-248tk" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.680320 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.686565 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.718132 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kct6w\" (UniqueName: \"kubernetes.io/projected/49da73f0-68f0-4d58-954a-f0c7132f3e9f-kube-api-access-kct6w\") pod \"watcher-operator-controller-manager-7d7cfb649d-wl9g6\" (UID: \"49da73f0-68f0-4d58-954a-f0c7132f3e9f\") " pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.718219 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.718370 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 
09:15:38.718419 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert podName:66456873-3ce6-4ccc-bc44-ef45d9c30821 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:39.718404269 +0000 UTC m=+1043.313602538 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" (UID: "66456873-3ce6-4ccc-bc44-ef45d9c30821") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.742165 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7"] Mar 21 09:15:38 crc kubenswrapper[4932]: W0321 09:15:38.763208 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f39b226_69b5_4dcf_b4eb_f92a2fc10261.slice/crio-2eb8530d0311e3365f92d56d7038c27978bcffbdf069d0a19973fc2f7892133c WatchSource:0}: Error finding container 2eb8530d0311e3365f92d56d7038c27978bcffbdf069d0a19973fc2f7892133c: Status 404 returned error can't find the container with id 2eb8530d0311e3365f92d56d7038c27978bcffbdf069d0a19973fc2f7892133c Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.765521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kct6w\" (UniqueName: \"kubernetes.io/projected/49da73f0-68f0-4d58-954a-f0c7132f3e9f-kube-api-access-kct6w\") pod \"watcher-operator-controller-manager-7d7cfb649d-wl9g6\" (UID: \"49da73f0-68f0-4d58-954a-f0c7132f3e9f\") " pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.774630 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.812693 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.828991 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmr5\" (UniqueName: \"kubernetes.io/projected/598a1e12-b105-41b7-93b5-123bd4f38dd9-kube-api-access-bpmr5\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.829076 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.829148 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.829164 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzf6s\" (UniqueName: 
\"kubernetes.io/projected/70cf1e28-3bbf-4c35-a013-e1a5bfab962f-kube-api-access-zzf6s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tlpd9\" (UID: \"70cf1e28-3bbf-4c35-a013-e1a5bfab962f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.883809 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.893637 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg"] Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.930178 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.930221 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzf6s\" (UniqueName: \"kubernetes.io/projected/70cf1e28-3bbf-4c35-a013-e1a5bfab962f-kube-api-access-zzf6s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tlpd9\" (UID: \"70cf1e28-3bbf-4c35-a013-e1a5bfab962f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.930264 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmr5\" (UniqueName: \"kubernetes.io/projected/598a1e12-b105-41b7-93b5-123bd4f38dd9-kube-api-access-bpmr5\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " 
pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.930306 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.930461 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.930509 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:39.430494211 +0000 UTC m=+1043.025692480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "webhook-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.931131 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: E0321 09:15:38.931164 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:39.4311549 +0000 UTC m=+1043.026353159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "metrics-server-cert" not found Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.938094 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.962333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzf6s\" (UniqueName: \"kubernetes.io/projected/70cf1e28-3bbf-4c35-a013-e1a5bfab962f-kube-api-access-zzf6s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tlpd9\" (UID: \"70cf1e28-3bbf-4c35-a013-e1a5bfab962f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" Mar 21 09:15:38 crc kubenswrapper[4932]: I0321 09:15:38.962588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmr5\" (UniqueName: \"kubernetes.io/projected/598a1e12-b105-41b7-93b5-123bd4f38dd9-kube-api-access-bpmr5\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.043524 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" event={"ID":"ac0ffb78-05f2-4c27-a5c3-6020714b1792","Type":"ContainerStarted","Data":"134daa69fe38a9af8e69a8de9f904ffd833d73c3b203a712fde2a6efc5deb45e"} Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.045480 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" 
event={"ID":"bc611f60-9bae-4e2a-a6f9-7f88221b7464","Type":"ContainerStarted","Data":"e11f2f2cb3fbb8abba674e0d23f53af073934c728dfdf562d7d1925a56369212"} Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.046735 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" event={"ID":"d6b79abd-6b81-44e4-89dd-3743dd7cc389","Type":"ContainerStarted","Data":"da389b71f35e078d249845a8e3564089c405aed039d8ea3e55943b5a29a95a6b"} Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.049553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" event={"ID":"0f39b226-69b5-4dcf-b4eb-f92a2fc10261","Type":"ContainerStarted","Data":"2eb8530d0311e3365f92d56d7038c27978bcffbdf069d0a19973fc2f7892133c"} Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.054767 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.080521 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.445918 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.446010 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod 
\"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.446135 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.446523 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.446555 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:40.446541727 +0000 UTC m=+1044.041739996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "metrics-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.446572 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:40.446565508 +0000 UTC m=+1044.041763767 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "webhook-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.519897 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.530360 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.540687 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.649102 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.649247 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.649305 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert podName:49d38fa1-5a18-49d6-92e0-47942d410eba nodeName:}" failed. No retries permitted until 2026-03-21 09:15:41.64928959 +0000 UTC m=+1045.244487859 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-zf69x" (UID: "49d38fa1-5a18-49d6-92e0-47942d410eba") : secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.750862 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.751601 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.751666 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert podName:66456873-3ce6-4ccc-bc44-ef45d9c30821 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:41.751645704 +0000 UTC m=+1045.346843983 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" (UID: "66456873-3ce6-4ccc-bc44-ef45d9c30821") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.824460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.831662 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.836584 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.871297 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.877727 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-xshm8"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.882920 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.893835 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d"] Mar 21 09:15:39 crc kubenswrapper[4932]: W0321 09:15:39.893830 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5984b5c_66b6_4141_ae47_b0b92f487355.slice/crio-85c9b741897e9361551e8d3da04f269d1045061d56f679189ab19781633f3f4d WatchSource:0}: Error finding container 85c9b741897e9361551e8d3da04f269d1045061d56f679189ab19781633f3f4d: Status 404 returned error can't find the container with id 85c9b741897e9361551e8d3da04f269d1045061d56f679189ab19781633f3f4d Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.900510 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.906365 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-w84fw"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.912619 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528"] Mar 21 09:15:39 crc kubenswrapper[4932]: I0321 09:15:39.921794 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6"] Mar 21 09:15:39 crc kubenswrapper[4932]: W0321 09:15:39.954944 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8356f354_c7d8_4c5c_9db0_c5e458971c5d.slice/crio-aec553d8e147ddbf2358b1fafbf8f0da9ee8a788d5fcde0508637bcf2976f70c WatchSource:0}: Error finding container aec553d8e147ddbf2358b1fafbf8f0da9ee8a788d5fcde0508637bcf2976f70c: Status 404 returned error can't find the container with id aec553d8e147ddbf2358b1fafbf8f0da9ee8a788d5fcde0508637bcf2976f70c Mar 21 09:15:39 crc kubenswrapper[4932]: W0321 09:15:39.958853 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49da73f0_68f0_4d58_954a_f0c7132f3e9f.slice/crio-6a418c84f245e1b24677d39fd85381b6304202ef144c8c9f301b4f9b5f425e83 WatchSource:0}: Error finding container 6a418c84f245e1b24677d39fd85381b6304202ef144c8c9f301b4f9b5f425e83: Status 404 returned error can't find the container with id 6a418c84f245e1b24677d39fd85381b6304202ef144c8c9f301b4f9b5f425e83 Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.964931 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzf6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tlpd9_openstack-operators(70cf1e28-3bbf-4c35-a013-e1a5bfab962f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.966096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" podUID="70cf1e28-3bbf-4c35-a013-e1a5bfab962f" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.974577 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2kx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-c674c5965-w84fw_openstack-operators(137f9007-91ad-4db2-bd97-4fdb99ba1ecb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.975732 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" podUID="137f9007-91ad-4db2-bd97-4fdb99ba1ecb" Mar 21 09:15:39 crc kubenswrapper[4932]: W0321 09:15:39.984507 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e153c9d_de71_4eea_9b33_4713472b3431.slice/crio-5a0605f0dafe8b247df633bc3d64e59ae2f33534cf32f240d39aa245d8b7416e WatchSource:0}: Error finding container 5a0605f0dafe8b247df633bc3d64e59ae2f33534cf32f240d39aa245d8b7416e: Status 404 returned error can't find the container with id 5a0605f0dafe8b247df633bc3d64e59ae2f33534cf32f240d39aa245d8b7416e Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.992954 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4ccbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5d488d59fb-nlr7d_openstack-operators(1e153c9d-de71-4eea-9b33-4713472b3431): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 21 09:15:39 crc kubenswrapper[4932]: E0321 09:15:39.994716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" podUID="1e153c9d-de71-4eea-9b33-4713472b3431" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.040731 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf"] Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.062126 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" event={"ID":"1e153c9d-de71-4eea-9b33-4713472b3431","Type":"ContainerStarted","Data":"5a0605f0dafe8b247df633bc3d64e59ae2f33534cf32f240d39aa245d8b7416e"} Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.063507 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" podUID="1e153c9d-de71-4eea-9b33-4713472b3431" Mar 21 09:15:40 crc kubenswrapper[4932]: W0321 09:15:40.064178 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f39e87_13b5_4282_8d4f_820f1f80931b.slice/crio-240c29deb98520b91c04e9856a1db11bf0aef3f0e3ebb8b8ff732fc6a1f1952c WatchSource:0}: Error finding container 240c29deb98520b91c04e9856a1db11bf0aef3f0e3ebb8b8ff732fc6a1f1952c: Status 404 returned error can't find the container with id 240c29deb98520b91c04e9856a1db11bf0aef3f0e3ebb8b8ff732fc6a1f1952c Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.068201 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" event={"ID":"e041327c-c039-44fc-8fa6-c7e606e1bc56","Type":"ContainerStarted","Data":"9e31deeba279bc04fdcc7d88af245eb696da12747ea761a3abf962842ef828b3"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.070809 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" event={"ID":"548b3963-aca5-475e-b79d-7d9870d11155","Type":"ContainerStarted","Data":"d636534b88bf2bdb83b9d87335c31896a24315a36c1ed0c928738084b5f1708e"} Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.072304 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpsnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-dbgzf_openstack-operators(83f39e87-13b5-4282-8d4f-820f1f80931b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.073686 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" podUID="83f39e87-13b5-4282-8d4f-820f1f80931b" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.078757 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" event={"ID":"015c7bce-9d22-47ec-90ac-049bbba07d7e","Type":"ContainerStarted","Data":"1372e28091c06302acb8d1d3b010df5bac1be8108277efaa17b93c686b10c12f"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.083339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" 
event={"ID":"49da73f0-68f0-4d58-954a-f0c7132f3e9f","Type":"ContainerStarted","Data":"6a418c84f245e1b24677d39fd85381b6304202ef144c8c9f301b4f9b5f425e83"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.086371 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" event={"ID":"137f9007-91ad-4db2-bd97-4fdb99ba1ecb","Type":"ContainerStarted","Data":"5c0a223cf6047e38c3d6bf2af5f6cb75f2efa2d0a1189dd6eb14e92ce6abc961"} Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.088843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" podUID="137f9007-91ad-4db2-bd97-4fdb99ba1ecb" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.092202 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" event={"ID":"ad2f0a32-8261-4292-a063-13b65ab4ffe8","Type":"ContainerStarted","Data":"bbabc63d9a88f1a1d6867a75b36ae562f57c70fbecebb3ce60946dcd4c33dfb5"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.095503 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" event={"ID":"9dc780d5-ff2a-4d92-ba79-076f72964907","Type":"ContainerStarted","Data":"d23bdecd6365b0f9a34b2d76a12ef596d36c86bbb8e634a25a1446875e149009"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.097236 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" 
event={"ID":"1e3fd98f-c5ac-4087-9a3d-a3aec1241774","Type":"ContainerStarted","Data":"719bb6fc9224019c41c97e306168c5d904ce6a051609506fbf5306185e2fd9e5"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.102598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" event={"ID":"8356f354-c7d8-4c5c-9db0-c5e458971c5d","Type":"ContainerStarted","Data":"aec553d8e147ddbf2358b1fafbf8f0da9ee8a788d5fcde0508637bcf2976f70c"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.112490 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" event={"ID":"e5984b5c-66b6-4141-ae47-b0b92f487355","Type":"ContainerStarted","Data":"85c9b741897e9361551e8d3da04f269d1045061d56f679189ab19781633f3f4d"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.115533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" event={"ID":"15c187a9-b91f-4418-a63e-20fd4f52de2f","Type":"ContainerStarted","Data":"0833d70cb6d9c01544328bd2777eb6503fd9fd5eeb7ee35aead5aa1119457aa2"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.116850 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" event={"ID":"a3b11074-e78d-4f10-890f-d0c9dc1b4d46","Type":"ContainerStarted","Data":"f4638d95bf266acf9708c729f0eb146a8d6ac5d4d0687580a659b7bc7df286b1"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.121389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" event={"ID":"23de4e40-dece-4cf2-a0f2-60fdcd2c7588","Type":"ContainerStarted","Data":"cc90560766c5c4c0016a7450935698940aff9663a362832cb85c25a446587fcf"} Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.126292 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" event={"ID":"70cf1e28-3bbf-4c35-a013-e1a5bfab962f","Type":"ContainerStarted","Data":"66a25c22e622158d714ef523880b0f9dd8a7067cfc39302a6db79cd6b2adbe61"} Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.129111 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" podUID="70cf1e28-3bbf-4c35-a013-e1a5bfab962f" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.142695 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.142751 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.208786 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.490164 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:40 crc kubenswrapper[4932]: I0321 09:15:40.490276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.490473 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.490525 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:42.49050787 +0000 UTC m=+1046.085706139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "webhook-server-cert" not found Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.490676 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 21 09:15:40 crc kubenswrapper[4932]: E0321 09:15:40.490750 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:42.490733667 +0000 UTC m=+1046.085931926 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "metrics-server-cert" not found Mar 21 09:15:41 crc kubenswrapper[4932]: I0321 09:15:41.171015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" event={"ID":"83f39e87-13b5-4282-8d4f-820f1f80931b","Type":"ContainerStarted","Data":"240c29deb98520b91c04e9856a1db11bf0aef3f0e3ebb8b8ff732fc6a1f1952c"} Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.174930 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" podUID="137f9007-91ad-4db2-bd97-4fdb99ba1ecb" Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.175339 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" podUID="83f39e87-13b5-4282-8d4f-820f1f80931b" Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.175432 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" 
podUID="1e153c9d-de71-4eea-9b33-4713472b3431" Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.179000 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" podUID="70cf1e28-3bbf-4c35-a013-e1a5bfab962f" Mar 21 09:15:41 crc kubenswrapper[4932]: I0321 09:15:41.255291 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:15:41 crc kubenswrapper[4932]: I0321 09:15:41.321295 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9twtl"] Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.722374 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.722497 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert podName:49d38fa1-5a18-49d6-92e0-47942d410eba nodeName:}" failed. No retries permitted until 2026-03-21 09:15:45.722476163 +0000 UTC m=+1049.317674432 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-zf69x" (UID: "49d38fa1-5a18-49d6-92e0-47942d410eba") : secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:41 crc kubenswrapper[4932]: I0321 09:15:41.722917 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:41 crc kubenswrapper[4932]: I0321 09:15:41.825189 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.825503 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:41 crc kubenswrapper[4932]: E0321 09:15:41.825657 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert podName:66456873-3ce6-4ccc-bc44-ef45d9c30821 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:45.82561777 +0000 UTC m=+1049.420816039 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" (UID: "66456873-3ce6-4ccc-bc44-ef45d9c30821") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:42 crc kubenswrapper[4932]: E0321 09:15:42.186859 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" podUID="83f39e87-13b5-4282-8d4f-820f1f80931b" Mar 21 09:15:42 crc kubenswrapper[4932]: I0321 09:15:42.537386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:42 crc kubenswrapper[4932]: I0321 09:15:42.537476 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:42 crc kubenswrapper[4932]: E0321 09:15:42.537589 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 21 09:15:42 crc kubenswrapper[4932]: E0321 09:15:42.537627 4932 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 21 09:15:42 crc kubenswrapper[4932]: E0321 09:15:42.537678 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:46.537659115 +0000 UTC m=+1050.132857384 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "metrics-server-cert" not found Mar 21 09:15:42 crc kubenswrapper[4932]: E0321 09:15:42.537704 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:46.537687796 +0000 UTC m=+1050.132886065 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "webhook-server-cert" not found Mar 21 09:15:43 crc kubenswrapper[4932]: I0321 09:15:43.189862 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9twtl" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="registry-server" containerID="cri-o://3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" gracePeriod=2 Mar 21 09:15:44 crc kubenswrapper[4932]: I0321 09:15:44.200839 4932 generic.go:334] "Generic (PLEG): container finished" podID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" exitCode=0 Mar 21 09:15:44 crc kubenswrapper[4932]: I0321 09:15:44.200885 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerDied","Data":"3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85"} Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.556431 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-snd5x"] Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.558374 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.569533 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snd5x"] Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.693509 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-catalog-content\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.693840 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f52v\" (UniqueName: \"kubernetes.io/projected/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-kube-api-access-6f52v\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.694002 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-utilities\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.795145 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-catalog-content\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.795209 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6f52v\" (UniqueName: \"kubernetes.io/projected/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-kube-api-access-6f52v\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.795269 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.795364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-utilities\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: E0321 09:15:45.795493 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:45 crc kubenswrapper[4932]: E0321 09:15:45.795564 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert podName:49d38fa1-5a18-49d6-92e0-47942d410eba nodeName:}" failed. No retries permitted until 2026-03-21 09:15:53.7955457 +0000 UTC m=+1057.390743969 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-zf69x" (UID: "49d38fa1-5a18-49d6-92e0-47942d410eba") : secret "infra-operator-webhook-server-cert" not found Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.795865 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-catalog-content\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.795939 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-utilities\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.820245 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f52v\" (UniqueName: \"kubernetes.io/projected/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-kube-api-access-6f52v\") pod \"redhat-marketplace-snd5x\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.874229 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:15:45 crc kubenswrapper[4932]: I0321 09:15:45.896454 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:45 crc kubenswrapper[4932]: E0321 09:15:45.896661 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:45 crc kubenswrapper[4932]: E0321 09:15:45.896747 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert podName:66456873-3ce6-4ccc-bc44-ef45d9c30821 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:53.896728501 +0000 UTC m=+1057.491926770 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" (UID: "66456873-3ce6-4ccc-bc44-ef45d9c30821") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 21 09:15:46 crc kubenswrapper[4932]: I0321 09:15:46.604552 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:46 crc kubenswrapper[4932]: I0321 09:15:46.604680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:46 crc kubenswrapper[4932]: E0321 09:15:46.604707 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 21 09:15:46 crc kubenswrapper[4932]: E0321 09:15:46.604771 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:54.604752839 +0000 UTC m=+1058.199951108 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "metrics-server-cert" not found Mar 21 09:15:46 crc kubenswrapper[4932]: E0321 09:15:46.604820 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 21 09:15:46 crc kubenswrapper[4932]: E0321 09:15:46.604909 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:15:54.604889443 +0000 UTC m=+1058.200087702 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "webhook-server-cert" not found Mar 21 09:15:50 crc kubenswrapper[4932]: E0321 09:15:50.144028 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" cmd=["grpc_health_probe","-addr=:50051"] Mar 21 09:15:50 crc kubenswrapper[4932]: E0321 09:15:50.144925 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" 
containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" cmd=["grpc_health_probe","-addr=:50051"] Mar 21 09:15:50 crc kubenswrapper[4932]: E0321 09:15:50.145376 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" cmd=["grpc_health_probe","-addr=:50051"] Mar 21 09:15:50 crc kubenswrapper[4932]: E0321 09:15:50.145403 4932 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-9twtl" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="registry-server" Mar 21 09:15:53 crc kubenswrapper[4932]: I0321 09:15:53.814898 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:53 crc kubenswrapper[4932]: I0321 09:15:53.821887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49d38fa1-5a18-49d6-92e0-47942d410eba-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-zf69x\" (UID: \"49d38fa1-5a18-49d6-92e0-47942d410eba\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:53 crc kubenswrapper[4932]: I0321 09:15:53.917008 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:53 crc kubenswrapper[4932]: I0321 09:15:53.921977 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66456873-3ce6-4ccc-bc44-ef45d9c30821-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-xg5gn\" (UID: \"66456873-3ce6-4ccc-bc44-ef45d9c30821\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:54 crc kubenswrapper[4932]: I0321 09:15:54.032025 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:15:54 crc kubenswrapper[4932]: I0321 09:15:54.056826 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:15:54 crc kubenswrapper[4932]: E0321 09:15:54.244420 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" Mar 21 09:15:54 crc kubenswrapper[4932]: E0321 09:15:54.244614 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jv5zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-768b96df4c-m25c9_openstack-operators(a3b11074-e78d-4f10-890f-d0c9dc1b4d46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:15:54 crc kubenswrapper[4932]: E0321 09:15:54.245777 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" podUID="a3b11074-e78d-4f10-890f-d0c9dc1b4d46" Mar 21 09:15:54 crc kubenswrapper[4932]: E0321 09:15:54.305553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" podUID="a3b11074-e78d-4f10-890f-d0c9dc1b4d46" Mar 21 09:15:54 crc kubenswrapper[4932]: I0321 09:15:54.632037 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:54 crc kubenswrapper[4932]: E0321 09:15:54.632243 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 21 09:15:54 crc kubenswrapper[4932]: E0321 09:15:54.632564 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs podName:598a1e12-b105-41b7-93b5-123bd4f38dd9 nodeName:}" failed. No retries permitted until 2026-03-21 09:16:10.63254484 +0000 UTC m=+1074.227743109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs") pod "openstack-operator-controller-manager-5c7b6d4df4-285rf" (UID: "598a1e12-b105-41b7-93b5-123bd4f38dd9") : secret "webhook-server-cert" not found Mar 21 09:15:54 crc kubenswrapper[4932]: I0321 09:15:54.632507 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:54 crc kubenswrapper[4932]: I0321 09:15:54.637358 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-metrics-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: 
\"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:15:56 crc kubenswrapper[4932]: E0321 09:15:56.310910 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" Mar 21 09:15:56 crc kubenswrapper[4932]: E0321 09:15:56.311430 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfv9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5b9f45d989-6hl6p_openstack-operators(23de4e40-dece-4cf2-a0f2-60fdcd2c7588): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:15:56 crc kubenswrapper[4932]: E0321 09:15:56.312841 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" podUID="23de4e40-dece-4cf2-a0f2-60fdcd2c7588" Mar 21 09:15:57 crc kubenswrapper[4932]: E0321 09:15:57.034530 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" Mar 21 09:15:57 crc kubenswrapper[4932]: E0321 09:15:57.034705 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vn2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d58dc466-b6g8p_openstack-operators(bc611f60-9bae-4e2a-a6f9-7f88221b7464): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:15:57 crc kubenswrapper[4932]: E0321 09:15:57.037443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" podUID="bc611f60-9bae-4e2a-a6f9-7f88221b7464" Mar 21 09:15:57 crc kubenswrapper[4932]: E0321 09:15:57.309397 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" podUID="bc611f60-9bae-4e2a-a6f9-7f88221b7464" Mar 21 09:15:57 crc kubenswrapper[4932]: E0321 09:15:57.310164 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" podUID="23de4e40-dece-4cf2-a0f2-60fdcd2c7588" Mar 21 09:15:58 crc kubenswrapper[4932]: E0321 09:15:58.827210 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" Mar 21 09:15:58 crc kubenswrapper[4932]: E0321 09:15:58.828026 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwc4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-8464cc45fb-r8qnx_openstack-operators(015c7bce-9d22-47ec-90ac-049bbba07d7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:15:58 crc kubenswrapper[4932]: E0321 09:15:58.829412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" podUID="015c7bce-9d22-47ec-90ac-049bbba07d7e" Mar 21 09:15:59 crc kubenswrapper[4932]: E0321 09:15:59.324849 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" podUID="015c7bce-9d22-47ec-90ac-049bbba07d7e" Mar 21 09:15:59 crc kubenswrapper[4932]: E0321 09:15:59.404360 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" Mar 21 09:15:59 crc kubenswrapper[4932]: E0321 09:15:59.404652 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2d77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-67dd5f86f5-mktkt_openstack-operators(548b3963-aca5-475e-b79d-7d9870d11155): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:15:59 crc kubenswrapper[4932]: E0321 09:15:59.405822 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" podUID="548b3963-aca5-475e-b79d-7d9870d11155" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.137887 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568076-xwz4c"] Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.139516 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.142161 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.142337 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.144444 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.144931 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" cmd=["grpc_health_probe","-addr=:50051"] Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.144979 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568076-xwz4c"] Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.145663 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" cmd=["grpc_health_probe","-addr=:50051"] Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.146226 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container 
process not found" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" cmd=["grpc_health_probe","-addr=:50051"] Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.146283 4932 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-9twtl" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="registry-server" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.186357 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.186591 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zl7gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-884679f54-5dgnz_openstack-operators(9dc780d5-ff2a-4d92-ba79-076f72964907): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.187895 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" podUID="9dc780d5-ff2a-4d92-ba79-076f72964907" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.228804 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.228866 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.228913 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.229637 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91e3049bee861f37df7d289eee8d3ed5ac012bf0c17d73d1859a4aa9a278e9f2"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.229704 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://91e3049bee861f37df7d289eee8d3ed5ac012bf0c17d73d1859a4aa9a278e9f2" gracePeriod=600 Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.332062 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" podUID="9dc780d5-ff2a-4d92-ba79-076f72964907" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.332736 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wnr\" (UniqueName: \"kubernetes.io/projected/dd38650a-3a05-4fb1-bb79-3641a7f91024-kube-api-access-b2wnr\") pod \"auto-csr-approver-29568076-xwz4c\" (UID: \"dd38650a-3a05-4fb1-bb79-3641a7f91024\") " pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.334439 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900\\\"\"" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" podUID="548b3963-aca5-475e-b79d-7d9870d11155" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.434565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2wnr\" (UniqueName: \"kubernetes.io/projected/dd38650a-3a05-4fb1-bb79-3641a7f91024-kube-api-access-b2wnr\") pod \"auto-csr-approver-29568076-xwz4c\" (UID: \"dd38650a-3a05-4fb1-bb79-3641a7f91024\") " pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.455588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2wnr\" (UniqueName: \"kubernetes.io/projected/dd38650a-3a05-4fb1-bb79-3641a7f91024-kube-api-access-b2wnr\") pod \"auto-csr-approver-29568076-xwz4c\" (UID: 
\"dd38650a-3a05-4fb1-bb79-3641a7f91024\") " pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.461641 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.752431 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.752925 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jd7gk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-55f864c847-gjtn7_openstack-operators(1e3fd98f-c5ac-4087-9a3d-a3aec1241774): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.754262 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" podUID="1e3fd98f-c5ac-4087-9a3d-a3aec1241774" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.814633 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.943100 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/openstack-k8s-operators/watcher-operator:3300cf73e2d3248addef0ca11278d270abd051a5" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.943160 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/openstack-k8s-operators/watcher-operator:3300cf73e2d3248addef0ca11278d270abd051a5" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.943318 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.159:5001/openstack-k8s-operators/watcher-operator:3300cf73e2d3248addef0ca11278d270abd051a5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kct6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7d7cfb649d-wl9g6_openstack-operators(49da73f0-68f0-4d58-954a-f0c7132f3e9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:16:00 crc kubenswrapper[4932]: E0321 09:16:00.944762 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" podUID="49da73f0-68f0-4d58-954a-f0c7132f3e9f" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.945450 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-utilities\") pod 
\"8eff25f7-98c9-4364-aceb-c5ce1578e66e\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.945494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-catalog-content\") pod \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.945517 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shc9z\" (UniqueName: \"kubernetes.io/projected/8eff25f7-98c9-4364-aceb-c5ce1578e66e-kube-api-access-shc9z\") pod \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\" (UID: \"8eff25f7-98c9-4364-aceb-c5ce1578e66e\") " Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.946473 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-utilities" (OuterVolumeSpecName: "utilities") pod "8eff25f7-98c9-4364-aceb-c5ce1578e66e" (UID: "8eff25f7-98c9-4364-aceb-c5ce1578e66e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:16:00 crc kubenswrapper[4932]: I0321 09:16:00.959712 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eff25f7-98c9-4364-aceb-c5ce1578e66e-kube-api-access-shc9z" (OuterVolumeSpecName: "kube-api-access-shc9z") pod "8eff25f7-98c9-4364-aceb-c5ce1578e66e" (UID: "8eff25f7-98c9-4364-aceb-c5ce1578e66e"). InnerVolumeSpecName "kube-api-access-shc9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.001116 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eff25f7-98c9-4364-aceb-c5ce1578e66e" (UID: "8eff25f7-98c9-4364-aceb-c5ce1578e66e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.047254 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.047297 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eff25f7-98c9-4364-aceb-c5ce1578e66e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.047316 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shc9z\" (UniqueName: \"kubernetes.io/projected/8eff25f7-98c9-4364-aceb-c5ce1578e66e-kube-api-access-shc9z\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.343320 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9twtl" event={"ID":"8eff25f7-98c9-4364-aceb-c5ce1578e66e","Type":"ContainerDied","Data":"126b45961543db7a55eb2adef8694d987ed15a9aa2472f0e84a3a3a89cf99ba3"} Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.343415 4932 scope.go:117] "RemoveContainer" containerID="3d801e6211a7754789885d1ba4b6be684ee52f29a0931bf097da148d9ce17a85" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.343556 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9twtl" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.356241 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="91e3049bee861f37df7d289eee8d3ed5ac012bf0c17d73d1859a4aa9a278e9f2" exitCode=0 Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.356331 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"91e3049bee861f37df7d289eee8d3ed5ac012bf0c17d73d1859a4aa9a278e9f2"} Mar 21 09:16:01 crc kubenswrapper[4932]: E0321 09:16:01.357719 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/openstack-k8s-operators/watcher-operator:3300cf73e2d3248addef0ca11278d270abd051a5\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" podUID="49da73f0-68f0-4d58-954a-f0c7132f3e9f" Mar 21 09:16:01 crc kubenswrapper[4932]: E0321 09:16:01.358520 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da\\\"\"" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" podUID="1e3fd98f-c5ac-4087-9a3d-a3aec1241774" Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.423437 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9twtl"] Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 09:16:01.431619 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9twtl"] Mar 21 09:16:01 crc kubenswrapper[4932]: I0321 
09:16:01.717653 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" path="/var/lib/kubelet/pods/8eff25f7-98c9-4364-aceb-c5ce1578e66e/volumes" Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.099262 4932 scope.go:117] "RemoveContainer" containerID="815c89554d6b04dc72b2decd86932aee03e5d5628f5b914c4fb15b873fd75936" Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.324388 4932 scope.go:117] "RemoveContainer" containerID="2cecce4b5f5761a377c6f274580afd522cc3057cd2adfadb56e18eb03abf0e08" Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.387778 4932 scope.go:117] "RemoveContainer" containerID="8decec33670c2842c92261f8dd47259d538fd63496f5f2522fb72de9d5a14bf4" Mar 21 09:16:04 crc kubenswrapper[4932]: E0321 09:16:04.388198 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cecce4b5f5761a377c6f274580afd522cc3057cd2adfadb56e18eb03abf0e08\": container with ID starting with 2cecce4b5f5761a377c6f274580afd522cc3057cd2adfadb56e18eb03abf0e08 not found: ID does not exist" containerID="2cecce4b5f5761a377c6f274580afd522cc3057cd2adfadb56e18eb03abf0e08" Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.594134 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snd5x"] Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.668009 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x"] Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.791123 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568076-xwz4c"] Mar 21 09:16:04 crc kubenswrapper[4932]: I0321 09:16:04.801777 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn"] Mar 21 09:16:04 crc kubenswrapper[4932]: 
W0321 09:16:04.817262 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66456873_3ce6_4ccc_bc44_ef45d9c30821.slice/crio-3731e0ef19341f19a506a562aae2bc399b46f90544d1a53e2eac81400dbad235 WatchSource:0}: Error finding container 3731e0ef19341f19a506a562aae2bc399b46f90544d1a53e2eac81400dbad235: Status 404 returned error can't find the container with id 3731e0ef19341f19a506a562aae2bc399b46f90544d1a53e2eac81400dbad235 Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.409408 4932 generic.go:334] "Generic (PLEG): container finished" podID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerID="de9e72d6de64394010265bdfa3ab7b40ce8ca5a454c4eb6ab103fe847cf44d3c" exitCode=0 Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.409584 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerDied","Data":"de9e72d6de64394010265bdfa3ab7b40ce8ca5a454c4eb6ab103fe847cf44d3c"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.409746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerStarted","Data":"b222b4c8640aa9bfd24ff02d4f9a876fb45766aa369c4c29385123837c15a072"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.421264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" event={"ID":"e041327c-c039-44fc-8fa6-c7e606e1bc56","Type":"ContainerStarted","Data":"924c3e3d367ce5928924c5a071837b213b6c73b4d64eedbd289fa1ad859594b1"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.421727 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:16:05 crc kubenswrapper[4932]: 
I0321 09:16:05.431692 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" event={"ID":"70cf1e28-3bbf-4c35-a013-e1a5bfab962f","Type":"ContainerStarted","Data":"c64a2970574d75bb439b0fc84c982773fddf80188178edc3a7fedb0cec94a489"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.444638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" event={"ID":"66456873-3ce6-4ccc-bc44-ef45d9c30821","Type":"ContainerStarted","Data":"3731e0ef19341f19a506a562aae2bc399b46f90544d1a53e2eac81400dbad235"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.463701 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" event={"ID":"dd38650a-3a05-4fb1-bb79-3641a7f91024","Type":"ContainerStarted","Data":"fae13a523456ccbe4629a5426bd4ad754095aeb76ef337d7f714f40bc6063a64"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.464554 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" podStartSLOduration=6.720963558 podStartE2EDuration="28.464534823s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.949184568 +0000 UTC m=+1043.544382837" lastFinishedPulling="2026-03-21 09:16:01.692755833 +0000 UTC m=+1065.287954102" observedRunningTime="2026-03-21 09:16:05.463339808 +0000 UTC m=+1069.058538077" watchObservedRunningTime="2026-03-21 09:16:05.464534823 +0000 UTC m=+1069.059733092" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.478975 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" 
event={"ID":"15c187a9-b91f-4418-a63e-20fd4f52de2f","Type":"ContainerStarted","Data":"ac8f58508374fa4d8aff325601cfd4d685be9eaec64fb17ec9a9736d2fef4fbe"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.479159 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.487710 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" event={"ID":"137f9007-91ad-4db2-bd97-4fdb99ba1ecb","Type":"ContainerStarted","Data":"a9bac09a5656e508eada08504108ae456776a1e459e2f85ab5d3f33038f38e42"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.488498 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.504831 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" event={"ID":"8356f354-c7d8-4c5c-9db0-c5e458971c5d","Type":"ContainerStarted","Data":"4f7aea13b120530057d0e91e4ecb73c229f8d833cbd85786836c24bf178297ca"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.505626 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.514614 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" event={"ID":"e5984b5c-66b6-4141-ae47-b0b92f487355","Type":"ContainerStarted","Data":"befe1ee201c0d7462d4dc5a1a867380360b2a90bbd1dc18b10a037c22db2760b"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.515457 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.518937 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" podStartSLOduration=5.004441305 podStartE2EDuration="28.518919157s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.974331671 +0000 UTC m=+1043.569529940" lastFinishedPulling="2026-03-21 09:16:03.488809523 +0000 UTC m=+1067.084007792" observedRunningTime="2026-03-21 09:16:05.518229537 +0000 UTC m=+1069.113427806" watchObservedRunningTime="2026-03-21 09:16:05.518919157 +0000 UTC m=+1069.114117426" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.520737 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tlpd9" podStartSLOduration=3.280554339 podStartE2EDuration="27.520722749s" podCreationTimestamp="2026-03-21 09:15:38 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.964788526 +0000 UTC m=+1043.559986795" lastFinishedPulling="2026-03-21 09:16:04.204956936 +0000 UTC m=+1067.800155205" observedRunningTime="2026-03-21 09:16:05.499855368 +0000 UTC m=+1069.095053647" watchObservedRunningTime="2026-03-21 09:16:05.520722749 +0000 UTC m=+1069.115921018" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.526723 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" event={"ID":"49d38fa1-5a18-49d6-92e0-47942d410eba","Type":"ContainerStarted","Data":"e43ca06cc4ef4f08ec98f2f01c9038b0c3306a388065ad29fe221d52e35c5840"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.543628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" 
event={"ID":"ac0ffb78-05f2-4c27-a5c3-6020714b1792","Type":"ContainerStarted","Data":"c1988a9bda72041aebf88f3235540d3859bd41e54347671a8c89200e76d43b33"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.544486 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.558677 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" event={"ID":"83f39e87-13b5-4282-8d4f-820f1f80931b","Type":"ContainerStarted","Data":"888a9f5b61ffed0c308c9ea0a6863011c608290e24b45167bc90cdaf8268f751"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.559470 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.560715 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" podStartSLOduration=7.598251685 podStartE2EDuration="28.560703339s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.956479977 +0000 UTC m=+1043.551678246" lastFinishedPulling="2026-03-21 09:16:00.918931631 +0000 UTC m=+1064.514129900" observedRunningTime="2026-03-21 09:16:05.560542354 +0000 UTC m=+1069.155740623" watchObservedRunningTime="2026-03-21 09:16:05.560703339 +0000 UTC m=+1069.155901608" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.576262 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" event={"ID":"0f39b226-69b5-4dcf-b4eb-f92a2fc10261","Type":"ContainerStarted","Data":"038b51d2563ac4d1fda4b4f03d1376b8e22fd47ac7d06028027ff9795b5c5b00"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.576312 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.604104 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" event={"ID":"ad2f0a32-8261-4292-a063-13b65ab4ffe8","Type":"ContainerStarted","Data":"0128737c3c9c293a14c3389dc178d72dd389b8d2913e0f6de22c14a1dfc850b4"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.604852 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.612008 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" event={"ID":"1e153c9d-de71-4eea-9b33-4713472b3431","Type":"ContainerStarted","Data":"abbff56c168f8d5555f11a5e82970b023989a9c6c1b5e0e834b6136d17349224"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.612318 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.641796 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" event={"ID":"d6b79abd-6b81-44e4-89dd-3743dd7cc389","Type":"ContainerStarted","Data":"1a7f8c007b4939f90b662e7ec6ed8a511cebbd5797dec71072614dd33c6f495f"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.642613 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.648189 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" podStartSLOduration=3.6329343659999997 podStartE2EDuration="27.648172755s" podCreationTimestamp="2026-03-21 09:15:38 +0000 UTC" firstStartedPulling="2026-03-21 09:15:40.072144655 +0000 UTC m=+1043.667342924" lastFinishedPulling="2026-03-21 09:16:04.087383034 +0000 UTC m=+1067.682581313" observedRunningTime="2026-03-21 09:16:05.614425944 +0000 UTC m=+1069.209624233" watchObservedRunningTime="2026-03-21 09:16:05.648172755 +0000 UTC m=+1069.243371024" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.681270 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" podStartSLOduration=6.913490576 podStartE2EDuration="28.681250237s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.916387624 +0000 UTC m=+1043.511585893" lastFinishedPulling="2026-03-21 09:16:01.684147295 +0000 UTC m=+1065.279345554" observedRunningTime="2026-03-21 09:16:05.657510384 +0000 UTC m=+1069.252708653" watchObservedRunningTime="2026-03-21 09:16:05.681250237 +0000 UTC m=+1069.276448506" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.684001 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885"} Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.706984 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" podStartSLOduration=4.940120824 podStartE2EDuration="28.706969467s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:38.995958214 +0000 UTC m=+1042.591156483" lastFinishedPulling="2026-03-21 09:16:02.762806867 +0000 
UTC m=+1066.358005126" observedRunningTime="2026-03-21 09:16:05.684793609 +0000 UTC m=+1069.279991878" watchObservedRunningTime="2026-03-21 09:16:05.706969467 +0000 UTC m=+1069.302167726" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.763491 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" podStartSLOduration=7.805094736 podStartE2EDuration="28.763474982s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.960422791 +0000 UTC m=+1043.555621060" lastFinishedPulling="2026-03-21 09:16:00.918803037 +0000 UTC m=+1064.514001306" observedRunningTime="2026-03-21 09:16:05.723713818 +0000 UTC m=+1069.318912077" watchObservedRunningTime="2026-03-21 09:16:05.763474982 +0000 UTC m=+1069.358673251" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.810401 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" podStartSLOduration=6.644944161 podStartE2EDuration="28.810383982s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:38.753489569 +0000 UTC m=+1042.348687838" lastFinishedPulling="2026-03-21 09:16:00.91892939 +0000 UTC m=+1064.514127659" observedRunningTime="2026-03-21 09:16:05.7832086 +0000 UTC m=+1069.378406889" watchObservedRunningTime="2026-03-21 09:16:05.810383982 +0000 UTC m=+1069.405582251" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.902927 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" podStartSLOduration=7.545494297 podStartE2EDuration="28.902904373s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.570462742 +0000 UTC m=+1043.165661011" lastFinishedPulling="2026-03-21 09:16:00.927872818 +0000 UTC 
m=+1064.523071087" observedRunningTime="2026-03-21 09:16:05.815003135 +0000 UTC m=+1069.410201404" watchObservedRunningTime="2026-03-21 09:16:05.902904373 +0000 UTC m=+1069.498102642" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.938850 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" podStartSLOduration=4.952517691 podStartE2EDuration="28.938828637s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:38.776503461 +0000 UTC m=+1042.371701730" lastFinishedPulling="2026-03-21 09:16:02.762814407 +0000 UTC m=+1066.358012676" observedRunningTime="2026-03-21 09:16:05.917941406 +0000 UTC m=+1069.513139675" watchObservedRunningTime="2026-03-21 09:16:05.938828637 +0000 UTC m=+1069.534026906" Mar 21 09:16:05 crc kubenswrapper[4932]: I0321 09:16:05.954639 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" podStartSLOduration=4.7623197390000005 podStartE2EDuration="28.954616642s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.99272819 +0000 UTC m=+1043.587926459" lastFinishedPulling="2026-03-21 09:16:04.185025093 +0000 UTC m=+1067.780223362" observedRunningTime="2026-03-21 09:16:05.937383246 +0000 UTC m=+1069.532581525" watchObservedRunningTime="2026-03-21 09:16:05.954616642 +0000 UTC m=+1069.549814911" Mar 21 09:16:06 crc kubenswrapper[4932]: I0321 09:16:06.705265 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerStarted","Data":"727018ad5556ba0a54cc529ca9821ed78487d923624d2468de59165b690b293e"} Mar 21 09:16:06 crc kubenswrapper[4932]: I0321 09:16:06.712390 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29568076-xwz4c" event={"ID":"dd38650a-3a05-4fb1-bb79-3641a7f91024","Type":"ContainerStarted","Data":"6cd5a22856c0c7b4d3c4fc4a8cdbf1632916f16838179690db0af8a3fb36f136"} Mar 21 09:16:06 crc kubenswrapper[4932]: I0321 09:16:06.758999 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" podStartSLOduration=5.883088064 podStartE2EDuration="6.758982972s" podCreationTimestamp="2026-03-21 09:16:00 +0000 UTC" firstStartedPulling="2026-03-21 09:16:04.818154467 +0000 UTC m=+1068.413352736" lastFinishedPulling="2026-03-21 09:16:05.694049375 +0000 UTC m=+1069.289247644" observedRunningTime="2026-03-21 09:16:06.758898339 +0000 UTC m=+1070.354096608" watchObservedRunningTime="2026-03-21 09:16:06.758982972 +0000 UTC m=+1070.354181241" Mar 21 09:16:07 crc kubenswrapper[4932]: I0321 09:16:07.730844 4932 generic.go:334] "Generic (PLEG): container finished" podID="dd38650a-3a05-4fb1-bb79-3641a7f91024" containerID="6cd5a22856c0c7b4d3c4fc4a8cdbf1632916f16838179690db0af8a3fb36f136" exitCode=0 Mar 21 09:16:07 crc kubenswrapper[4932]: I0321 09:16:07.731190 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" event={"ID":"dd38650a-3a05-4fb1-bb79-3641a7f91024","Type":"ContainerDied","Data":"6cd5a22856c0c7b4d3c4fc4a8cdbf1632916f16838179690db0af8a3fb36f136"} Mar 21 09:16:07 crc kubenswrapper[4932]: I0321 09:16:07.754562 4932 generic.go:334] "Generic (PLEG): container finished" podID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerID="727018ad5556ba0a54cc529ca9821ed78487d923624d2468de59165b690b293e" exitCode=0 Mar 21 09:16:07 crc kubenswrapper[4932]: I0321 09:16:07.754734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerDied","Data":"727018ad5556ba0a54cc529ca9821ed78487d923624d2468de59165b690b293e"} 
Mar 21 09:16:07 crc kubenswrapper[4932]: I0321 09:16:07.754813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerStarted","Data":"eab492b8d29a0b17c0073fdd94a5ab7348697b6d95690f54217c869bcf8d1ea4"} Mar 21 09:16:07 crc kubenswrapper[4932]: I0321 09:16:07.816314 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-snd5x" podStartSLOduration=21.069452846 podStartE2EDuration="22.81629676s" podCreationTimestamp="2026-03-21 09:15:45 +0000 UTC" firstStartedPulling="2026-03-21 09:16:05.411087305 +0000 UTC m=+1069.006285574" lastFinishedPulling="2026-03-21 09:16:07.157931219 +0000 UTC m=+1070.753129488" observedRunningTime="2026-03-21 09:16:07.811686137 +0000 UTC m=+1071.406884416" watchObservedRunningTime="2026-03-21 09:16:07.81629676 +0000 UTC m=+1071.411495019" Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.229240 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.349277 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2wnr\" (UniqueName: \"kubernetes.io/projected/dd38650a-3a05-4fb1-bb79-3641a7f91024-kube-api-access-b2wnr\") pod \"dd38650a-3a05-4fb1-bb79-3641a7f91024\" (UID: \"dd38650a-3a05-4fb1-bb79-3641a7f91024\") " Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.356190 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd38650a-3a05-4fb1-bb79-3641a7f91024-kube-api-access-b2wnr" (OuterVolumeSpecName: "kube-api-access-b2wnr") pod "dd38650a-3a05-4fb1-bb79-3641a7f91024" (UID: "dd38650a-3a05-4fb1-bb79-3641a7f91024"). InnerVolumeSpecName "kube-api-access-b2wnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.450774 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2wnr\" (UniqueName: \"kubernetes.io/projected/dd38650a-3a05-4fb1-bb79-3641a7f91024-kube-api-access-b2wnr\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.773566 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" event={"ID":"dd38650a-3a05-4fb1-bb79-3641a7f91024","Type":"ContainerDied","Data":"fae13a523456ccbe4629a5426bd4ad754095aeb76ef337d7f714f40bc6063a64"} Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.773614 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae13a523456ccbe4629a5426bd4ad754095aeb76ef337d7f714f40bc6063a64" Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.773688 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568076-xwz4c" Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.825171 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568070-hk5lh"] Mar 21 09:16:09 crc kubenswrapper[4932]: I0321 09:16:09.839870 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568070-hk5lh"] Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.669118 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.678454 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/598a1e12-b105-41b7-93b5-123bd4f38dd9-webhook-certs\") pod \"openstack-operator-controller-manager-5c7b6d4df4-285rf\" (UID: \"598a1e12-b105-41b7-93b5-123bd4f38dd9\") " pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.769365 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.785806 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" event={"ID":"a3b11074-e78d-4f10-890f-d0c9dc1b4d46","Type":"ContainerStarted","Data":"b8cd7bfe060c342963871941e95827c850e00c36baafbb01b1a4161d42cc7ad1"} Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.787273 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.788824 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" event={"ID":"66456873-3ce6-4ccc-bc44-ef45d9c30821","Type":"ContainerStarted","Data":"1154cac5c3a592c9469446b7ed99692961adb8214caeedb390e42e8d5a1a62a3"} Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.789477 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.790797 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" 
event={"ID":"bc611f60-9bae-4e2a-a6f9-7f88221b7464","Type":"ContainerStarted","Data":"f0b7f15d610724e018c2a4970e92469c44061f009b12076a9f97512282d54532"} Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.791286 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.803813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" event={"ID":"49d38fa1-5a18-49d6-92e0-47942d410eba","Type":"ContainerStarted","Data":"d4b350c206416dbbdf52f358838abd13637b0cc71b0212e2fae5fc3a094d711d"} Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.804683 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.865807 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" podStartSLOduration=3.7792906779999997 podStartE2EDuration="33.865792759s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.894603107 +0000 UTC m=+1043.489801376" lastFinishedPulling="2026-03-21 09:16:09.981105188 +0000 UTC m=+1073.576303457" observedRunningTime="2026-03-21 09:16:10.862102464 +0000 UTC m=+1074.457300733" watchObservedRunningTime="2026-03-21 09:16:10.865792759 +0000 UTC m=+1074.460991018" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.903042 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" podStartSLOduration=2.6946156439999998 podStartE2EDuration="33.903020021s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:38.996887651 +0000 UTC 
m=+1042.592085920" lastFinishedPulling="2026-03-21 09:16:10.205292028 +0000 UTC m=+1073.800490297" observedRunningTime="2026-03-21 09:16:10.894734612 +0000 UTC m=+1074.489932891" watchObservedRunningTime="2026-03-21 09:16:10.903020021 +0000 UTC m=+1074.498218290" Mar 21 09:16:10 crc kubenswrapper[4932]: I0321 09:16:10.939958 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" podStartSLOduration=28.663724023 podStartE2EDuration="33.939934253s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:16:04.704910649 +0000 UTC m=+1068.300108918" lastFinishedPulling="2026-03-21 09:16:09.981120879 +0000 UTC m=+1073.576319148" observedRunningTime="2026-03-21 09:16:10.931544471 +0000 UTC m=+1074.526742750" watchObservedRunningTime="2026-03-21 09:16:10.939934253 +0000 UTC m=+1074.535132522" Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.014824 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" podStartSLOduration=28.832795296 podStartE2EDuration="34.014801676s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:16:04.819591428 +0000 UTC m=+1068.414789697" lastFinishedPulling="2026-03-21 09:16:10.001597808 +0000 UTC m=+1073.596796077" observedRunningTime="2026-03-21 09:16:11.001841844 +0000 UTC m=+1074.597040113" watchObservedRunningTime="2026-03-21 09:16:11.014801676 +0000 UTC m=+1074.609999945" Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.271380 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf"] Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.714079 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2456e5f-1686-4196-b670-9e994a0d694f" 
path="/var/lib/kubelet/pods/a2456e5f-1686-4196-b670-9e994a0d694f/volumes" Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.813605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" event={"ID":"598a1e12-b105-41b7-93b5-123bd4f38dd9","Type":"ContainerStarted","Data":"ec7b3d39d1ac087c67475b30e2f9d3af3ab1604a8796005facb0d347e2dc6b92"} Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.813661 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" event={"ID":"598a1e12-b105-41b7-93b5-123bd4f38dd9","Type":"ContainerStarted","Data":"56332c7579c1b7b5378be8eb56e012e90f66cf2396940a404baedf47a33ca073"} Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.814832 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.816870 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" event={"ID":"015c7bce-9d22-47ec-90ac-049bbba07d7e","Type":"ContainerStarted","Data":"5f4b58e2168d3ae3c4b019db782bb2c0bcdc663aa628547324211d9218e939a9"} Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.817245 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.857956 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" podStartSLOduration=33.857937462 podStartE2EDuration="33.857937462s" podCreationTimestamp="2026-03-21 09:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-21 09:16:11.842217001 +0000 UTC m=+1075.437415270" watchObservedRunningTime="2026-03-21 09:16:11.857937462 +0000 UTC m=+1075.453135731" Mar 21 09:16:11 crc kubenswrapper[4932]: I0321 09:16:11.871204 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" podStartSLOduration=3.20823693 podStartE2EDuration="34.871183033s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.564156591 +0000 UTC m=+1043.159354860" lastFinishedPulling="2026-03-21 09:16:11.227102694 +0000 UTC m=+1074.822300963" observedRunningTime="2026-03-21 09:16:11.870932166 +0000 UTC m=+1075.466130435" watchObservedRunningTime="2026-03-21 09:16:11.871183033 +0000 UTC m=+1075.466381322" Mar 21 09:16:12 crc kubenswrapper[4932]: I0321 09:16:12.824421 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" event={"ID":"9dc780d5-ff2a-4d92-ba79-076f72964907","Type":"ContainerStarted","Data":"9b7a4d5ce3ce35ca8459f0861b186e1f34343b43ecc769683b3f4f8fc17874fb"} Mar 21 09:16:12 crc kubenswrapper[4932]: I0321 09:16:12.825480 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:16:12 crc kubenswrapper[4932]: I0321 09:16:12.827372 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" event={"ID":"49da73f0-68f0-4d58-954a-f0c7132f3e9f","Type":"ContainerStarted","Data":"42766db262858d50eca97dc5e50440c677c712556b888c9e236f29f21d215615"} Mar 21 09:16:12 crc kubenswrapper[4932]: I0321 09:16:12.853717 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" podStartSLOduration=3.606933991 
podStartE2EDuration="35.853692169s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.957795296 +0000 UTC m=+1043.552993565" lastFinishedPulling="2026-03-21 09:16:12.204553474 +0000 UTC m=+1075.799751743" observedRunningTime="2026-03-21 09:16:12.84783504 +0000 UTC m=+1076.443033329" watchObservedRunningTime="2026-03-21 09:16:12.853692169 +0000 UTC m=+1076.448890438" Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.299933 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" event={"ID":"548b3963-aca5-475e-b79d-7d9870d11155","Type":"ContainerStarted","Data":"53efc9465fd0fbb3ad58c39eafb72f7ee527fba2230bc387438d5a0ebd30ed1b"} Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.300458 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.304618 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" event={"ID":"23de4e40-dece-4cf2-a0f2-60fdcd2c7588","Type":"ContainerStarted","Data":"05074ac498fcae31ca19ce46fa1aaf78aa1afb695caeb5522c6b806851b51d62"} Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.305780 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.321256 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" podStartSLOduration=3.212279406 podStartE2EDuration="37.321240028s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.128277911 +0000 UTC m=+1042.723476180" lastFinishedPulling="2026-03-21 09:16:13.237238533 +0000 
UTC m=+1076.832436802" observedRunningTime="2026-03-21 09:16:14.317706007 +0000 UTC m=+1077.912904276" watchObservedRunningTime="2026-03-21 09:16:14.321240028 +0000 UTC m=+1077.916438287" Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.321685 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" podStartSLOduration=4.500275348 podStartE2EDuration="36.321681351s" podCreationTimestamp="2026-03-21 09:15:38 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.961586274 +0000 UTC m=+1043.556784543" lastFinishedPulling="2026-03-21 09:16:11.782992277 +0000 UTC m=+1075.378190546" observedRunningTime="2026-03-21 09:16:12.871146491 +0000 UTC m=+1076.466344770" watchObservedRunningTime="2026-03-21 09:16:14.321681351 +0000 UTC m=+1077.916879620" Mar 21 09:16:14 crc kubenswrapper[4932]: I0321 09:16:14.336044 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" podStartSLOduration=4.052616652 podStartE2EDuration="37.336022344s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.95239079 +0000 UTC m=+1043.547589059" lastFinishedPulling="2026-03-21 09:16:13.235796482 +0000 UTC m=+1076.830994751" observedRunningTime="2026-03-21 09:16:14.332193404 +0000 UTC m=+1077.927391673" watchObservedRunningTime="2026-03-21 09:16:14.336022344 +0000 UTC m=+1077.931220613" Mar 21 09:16:15 crc kubenswrapper[4932]: I0321 09:16:15.874844 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:16:15 crc kubenswrapper[4932]: I0321 09:16:15.875166 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:16:15 crc kubenswrapper[4932]: I0321 09:16:15.923540 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:16:16 crc kubenswrapper[4932]: I0321 09:16:16.370984 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:16:16 crc kubenswrapper[4932]: I0321 09:16:16.440181 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snd5x"] Mar 21 09:16:17 crc kubenswrapper[4932]: I0321 09:16:17.873835 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4jn2b" Mar 21 09:16:17 crc kubenswrapper[4932]: I0321 09:16:17.886697 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-b6g8p" Mar 21 09:16:17 crc kubenswrapper[4932]: I0321 09:16:17.939476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-n42t7" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.021408 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kbdqg" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.067637 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-mktkt" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.112632 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-r8qnx" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.238093 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-m25c9" Mar 21 09:16:18 crc kubenswrapper[4932]: 
I0321 09:16:18.286859 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-xshm8" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.293080 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-b4jmj" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.317180 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-nlr7d" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.336008 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-snd5x" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="registry-server" containerID="cri-o://eab492b8d29a0b17c0073fdd94a5ab7348697b6d95690f54217c869bcf8d1ea4" gracePeriod=2 Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.356149 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-6hl6p" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.476297 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-dt528" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.485914 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-5dgnz" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.575939 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-gh8vr" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.601164 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-c674c5965-w84fw" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.690448 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-2qld2" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.817074 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-dbgzf" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.940283 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:16:18 crc kubenswrapper[4932]: I0321 09:16:18.945188 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7d7cfb649d-wl9g6" Mar 21 09:16:19 crc kubenswrapper[4932]: I0321 09:16:19.348016 4932 generic.go:334] "Generic (PLEG): container finished" podID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerID="eab492b8d29a0b17c0073fdd94a5ab7348697b6d95690f54217c869bcf8d1ea4" exitCode=0 Mar 21 09:16:19 crc kubenswrapper[4932]: I0321 09:16:19.348071 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerDied","Data":"eab492b8d29a0b17c0073fdd94a5ab7348697b6d95690f54217c869bcf8d1ea4"} Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.382822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snd5x" event={"ID":"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb","Type":"ContainerDied","Data":"b222b4c8640aa9bfd24ff02d4f9a876fb45766aa369c4c29385123837c15a072"} Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.383994 4932 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b222b4c8640aa9bfd24ff02d4f9a876fb45766aa369c4c29385123837c15a072" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.386796 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" event={"ID":"1e3fd98f-c5ac-4087-9a3d-a3aec1241774","Type":"ContainerStarted","Data":"8fca1ceae16c1786c5a49d5de012550751a4869e00cc100fc031dadb91b8a2c1"} Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.387076 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.388565 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.407197 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" podStartSLOduration=2.77551198 podStartE2EDuration="43.407178493s" podCreationTimestamp="2026-03-21 09:15:37 +0000 UTC" firstStartedPulling="2026-03-21 09:15:39.553181665 +0000 UTC m=+1043.148379934" lastFinishedPulling="2026-03-21 09:16:20.184848178 +0000 UTC m=+1083.780046447" observedRunningTime="2026-03-21 09:16:20.401427528 +0000 UTC m=+1083.996625797" watchObservedRunningTime="2026-03-21 09:16:20.407178493 +0000 UTC m=+1084.002376762" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.506474 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-utilities\") pod \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.506524 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6f52v\" (UniqueName: \"kubernetes.io/projected/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-kube-api-access-6f52v\") pod \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.506571 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-catalog-content\") pod \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\" (UID: \"fe32ec93-beef-49ae-8c3a-d713f1d5fcfb\") " Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.507413 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-utilities" (OuterVolumeSpecName: "utilities") pod "fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" (UID: "fe32ec93-beef-49ae-8c3a-d713f1d5fcfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.513552 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-kube-api-access-6f52v" (OuterVolumeSpecName: "kube-api-access-6f52v") pod "fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" (UID: "fe32ec93-beef-49ae-8c3a-d713f1d5fcfb"). InnerVolumeSpecName "kube-api-access-6f52v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.535633 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" (UID: "fe32ec93-beef-49ae-8c3a-d713f1d5fcfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.608187 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.608218 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f52v\" (UniqueName: \"kubernetes.io/projected/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-kube-api-access-6f52v\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.608232 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.687093 4932 scope.go:117] "RemoveContainer" containerID="be9ecffb1050ac9442a7a7870506ce53c03a1c7bf67b12e486b1f8295e3c449e" Mar 21 09:16:20 crc kubenswrapper[4932]: I0321 09:16:20.775990 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5c7b6d4df4-285rf" Mar 21 09:16:21 crc kubenswrapper[4932]: I0321 09:16:21.394219 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snd5x" Mar 21 09:16:21 crc kubenswrapper[4932]: I0321 09:16:21.425788 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snd5x"] Mar 21 09:16:21 crc kubenswrapper[4932]: I0321 09:16:21.433099 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-snd5x"] Mar 21 09:16:21 crc kubenswrapper[4932]: E0321 09:16:21.532102 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32ec93_beef_49ae_8c3a_d713f1d5fcfb.slice/crio-b222b4c8640aa9bfd24ff02d4f9a876fb45766aa369c4c29385123837c15a072\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe32ec93_beef_49ae_8c3a_d713f1d5fcfb.slice\": RecentStats: unable to find data in memory cache]" Mar 21 09:16:21 crc kubenswrapper[4932]: I0321 09:16:21.715154 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" path="/var/lib/kubelet/pods/fe32ec93-beef-49ae-8c3a-d713f1d5fcfb/volumes" Mar 21 09:16:24 crc kubenswrapper[4932]: I0321 09:16:24.038760 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-xg5gn" Mar 21 09:16:24 crc kubenswrapper[4932]: I0321 09:16:24.065781 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-zf69x" Mar 21 09:16:28 crc kubenswrapper[4932]: I0321 09:16:28.266179 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-gjtn7" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.877075 4932 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz"] Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878004 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="registry-server" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878019 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="registry-server" Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878039 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="extract-utilities" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878047 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="extract-utilities" Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878058 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="extract-content" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878066 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="extract-content" Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878088 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd38650a-3a05-4fb1-bb79-3641a7f91024" containerName="oc" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878096 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd38650a-3a05-4fb1-bb79-3641a7f91024" containerName="oc" Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878129 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="registry-server" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878138 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" 
containerName="registry-server" Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878223 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="extract-content" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878251 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="extract-content" Mar 21 09:16:46 crc kubenswrapper[4932]: E0321 09:16:46.878697 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="extract-utilities" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878708 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="extract-utilities" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878905 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe32ec93-beef-49ae-8c3a-d713f1d5fcfb" containerName="registry-server" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.878928 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eff25f7-98c9-4364-aceb-c5ce1578e66e" containerName="registry-server" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.879060 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd38650a-3a05-4fb1-bb79-3641a7f91024" containerName="oc" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.881052 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.884027 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6bjwq" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.884610 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.884773 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.885005 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.888036 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz"] Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.920976 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-khmhg"] Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.922206 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.930751 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 21 09:16:46 crc kubenswrapper[4932]: I0321 09:16:46.937746 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-khmhg"] Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.012942 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kncwz\" (UniqueName: \"kubernetes.io/projected/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-kube-api-access-kncwz\") pod \"dnsmasq-dns-6cf7b9b6b9-f5lfz\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.013004 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-config\") pod \"dnsmasq-dns-6cf7b9b6b9-f5lfz\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.115654 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs5h\" (UniqueName: \"kubernetes.io/projected/a41cc7c9-0b0e-4f5d-823d-57868a845006-kube-api-access-jhs5h\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.115699 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-config\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " 
pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.115726 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-dns-svc\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.115751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-config\") pod \"dnsmasq-dns-6cf7b9b6b9-f5lfz\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.115767 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kncwz\" (UniqueName: \"kubernetes.io/projected/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-kube-api-access-kncwz\") pod \"dnsmasq-dns-6cf7b9b6b9-f5lfz\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.117000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-config\") pod \"dnsmasq-dns-6cf7b9b6b9-f5lfz\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.134073 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kncwz\" (UniqueName: \"kubernetes.io/projected/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-kube-api-access-kncwz\") pod \"dnsmasq-dns-6cf7b9b6b9-f5lfz\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc 
kubenswrapper[4932]: I0321 09:16:47.203097 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.216467 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-config\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.216506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhs5h\" (UniqueName: \"kubernetes.io/projected/a41cc7c9-0b0e-4f5d-823d-57868a845006-kube-api-access-jhs5h\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.216539 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-dns-svc\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.217487 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-dns-svc\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.217540 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-config\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") 
" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.234381 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhs5h\" (UniqueName: \"kubernetes.io/projected/a41cc7c9-0b0e-4f5d-823d-57868a845006-kube-api-access-jhs5h\") pod \"dnsmasq-dns-5f48d6b889-khmhg\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.281917 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.674254 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz"] Mar 21 09:16:47 crc kubenswrapper[4932]: I0321 09:16:47.762960 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-khmhg"] Mar 21 09:16:47 crc kubenswrapper[4932]: W0321 09:16:47.770559 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda41cc7c9_0b0e_4f5d_823d_57868a845006.slice/crio-9b34ca8855423e0c7bec6ec1b09d9af3cf49461b3d97e99605b57713fbbf3d21 WatchSource:0}: Error finding container 9b34ca8855423e0c7bec6ec1b09d9af3cf49461b3d97e99605b57713fbbf3d21: Status 404 returned error can't find the container with id 9b34ca8855423e0c7bec6ec1b09d9af3cf49461b3d97e99605b57713fbbf3d21 Mar 21 09:16:48 crc kubenswrapper[4932]: I0321 09:16:48.573466 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" event={"ID":"e83cf6b3-c87b-4e81-aff6-4ec5151c5692","Type":"ContainerStarted","Data":"b365e0757cd5853dcaa04ab7be66beabad26846a71937621847a6c197e276d60"} Mar 21 09:16:48 crc kubenswrapper[4932]: I0321 09:16:48.576168 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" 
event={"ID":"a41cc7c9-0b0e-4f5d-823d-57868a845006","Type":"ContainerStarted","Data":"9b34ca8855423e0c7bec6ec1b09d9af3cf49461b3d97e99605b57713fbbf3d21"} Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.592255 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-khmhg"] Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.635402 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-766cdc564f-k86fp"] Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.636635 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.655286 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766cdc564f-k86fp"] Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.697225 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-dns-svc\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.697301 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-config\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.697362 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh2c4\" (UniqueName: \"kubernetes.io/projected/668a51ac-eb24-4148-8416-2c543cc983aa-kube-api-access-bh2c4\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " 
pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.798983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-dns-svc\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.799157 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-config\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.799263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh2c4\" (UniqueName: \"kubernetes.io/projected/668a51ac-eb24-4148-8416-2c543cc983aa-kube-api-access-bh2c4\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.800113 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-dns-svc\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.800855 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-config\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.825194 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh2c4\" (UniqueName: \"kubernetes.io/projected/668a51ac-eb24-4148-8416-2c543cc983aa-kube-api-access-bh2c4\") pod \"dnsmasq-dns-766cdc564f-k86fp\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") " pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:50 crc kubenswrapper[4932]: I0321 09:16:50.958684 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766cdc564f-k86fp" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.003787 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.053393 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bc4895fdf-tlpfr"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.054556 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.082390 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bc4895fdf-tlpfr"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.115370 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-dns-svc\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.115674 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-config\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc 
kubenswrapper[4932]: I0321 09:16:51.115715 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d76j\" (UniqueName: \"kubernetes.io/projected/8aa503b9-d693-4732-9c87-86ac0692fc90-kube-api-access-4d76j\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.220580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-config\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.220914 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d76j\" (UniqueName: \"kubernetes.io/projected/8aa503b9-d693-4732-9c87-86ac0692fc90-kube-api-access-4d76j\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.220971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-dns-svc\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.222096 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-dns-svc\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.223255 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-config\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.253682 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d76j\" (UniqueName: \"kubernetes.io/projected/8aa503b9-d693-4732-9c87-86ac0692fc90-kube-api-access-4d76j\") pod \"dnsmasq-dns-5bc4895fdf-tlpfr\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.442972 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bc4895fdf-tlpfr"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.444094 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.471508 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.473253 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.490077 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.561955 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbqj9"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.571369 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.581458 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbqj9"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.632940 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-config\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.633002 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-dns-svc\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.633097 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t76t\" (UniqueName: \"kubernetes.io/projected/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-kube-api-access-5t76t\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.686737 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-766cdc564f-k86fp"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.734471 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-utilities\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " 
pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.736042 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t76t\" (UniqueName: \"kubernetes.io/projected/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-kube-api-access-5t76t\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.736147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-config\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.736180 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw72v\" (UniqueName: \"kubernetes.io/projected/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-kube-api-access-fw72v\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.736241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-dns-svc\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.736337 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-catalog-content\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " 
pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.737542 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-config\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.737782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-dns-svc\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.760316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t76t\" (UniqueName: \"kubernetes.io/projected/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-kube-api-access-5t76t\") pod \"dnsmasq-dns-5fdbdbc8cc-z8kx2\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") " pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.794953 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.803329 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.805131 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.806445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.806671 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.806904 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.808308 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.808391 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.808500 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.811987 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fhlh8" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.833012 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.837146 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-utilities\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.837211 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw72v\" (UniqueName: \"kubernetes.io/projected/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-kube-api-access-fw72v\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.837270 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-catalog-content\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.837698 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-catalog-content\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.837911 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-utilities\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " 
pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.884135 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw72v\" (UniqueName: \"kubernetes.io/projected/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-kube-api-access-fw72v\") pod \"community-operators-vbqj9\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.905296 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939247 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939290 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939312 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939448 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939477 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939533 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939582 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hts\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-kube-api-access-r5hts\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.939609 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.940721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.941312 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-config-data\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:51 crc kubenswrapper[4932]: I0321 09:16:51.941537 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5hts\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-kube-api-access-r5hts\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042615 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042665 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-config-data\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042732 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042820 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042839 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.042887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.045195 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.061229 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.065481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.065968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.066746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-config-data\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.068013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.069184 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5hts\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-kube-api-access-r5hts\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.069881 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " 
pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.070124 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.070322 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.079166 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/debe0e76-6d3a-402f-af21-a3ba7ceb5a24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.094895 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"debe0e76-6d3a-402f-af21-a3ba7ceb5a24\") " pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.122824 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bc4895fdf-tlpfr"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.123204 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.216215 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.218414 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.222833 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.223066 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.223261 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.223466 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.223639 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.223844 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.224131 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lcfkf" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.241338 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.407454 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.407657 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.407770 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwm6b\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-kube-api-access-bwm6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.407939 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.408080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52bf7d16-ddac-464e-aca0-7756f5a9f696-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.408128 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.408157 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.408216 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52bf7d16-ddac-464e-aca0-7756f5a9f696-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.408260 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.408286 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.430836 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.483117 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.490817 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbqj9"] Mar 21 09:16:52 crc kubenswrapper[4932]: W0321 09:16:52.531657 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2886ddaa_5c25_4044_8cbb_8a9249c32ee4.slice/crio-4f0f3d7e33de3c76345f1a5958ba24721dd9b07ce2a616b0b7addd4a7c55a027 WatchSource:0}: Error finding container 4f0f3d7e33de3c76345f1a5958ba24721dd9b07ce2a616b0b7addd4a7c55a027: Status 404 returned error can't find the container with id 4f0f3d7e33de3c76345f1a5958ba24721dd9b07ce2a616b0b7addd4a7c55a027 Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532653 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52bf7d16-ddac-464e-aca0-7756f5a9f696-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532681 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532703 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52bf7d16-ddac-464e-aca0-7756f5a9f696-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532756 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532778 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532800 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532818 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532843 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.532881 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwm6b\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-kube-api-access-bwm6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.539211 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.553507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.553788 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.554965 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.555111 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.558611 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.558724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52bf7d16-ddac-464e-aca0-7756f5a9f696-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.559232 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.559644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52bf7d16-ddac-464e-aca0-7756f5a9f696-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.560109 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52bf7d16-ddac-464e-aca0-7756f5a9f696-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.569724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwm6b\" (UniqueName: \"kubernetes.io/projected/52bf7d16-ddac-464e-aca0-7756f5a9f696-kube-api-access-bwm6b\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.599581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52bf7d16-ddac-464e-aca0-7756f5a9f696\") " pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.607132 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.608737 4932 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.611990 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.612611 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.612701 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.612852 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.616694 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.617584 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-fssgg" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.618553 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.632314 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.668261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" event={"ID":"5023c0a6-e1c9-424f-b72b-b3da9afd59ce","Type":"ContainerStarted","Data":"0ea4ec4af29e02eb7720f8946261f215287a4ff0ec893892bdc832d2c5136da0"} Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.683127 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" event={"ID":"8aa503b9-d693-4732-9c87-86ac0692fc90","Type":"ContainerStarted","Data":"10ae35fa6ff69fe09ebe8b47294d64044dabfe6146ae359224f7b3026d406700"} Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.686599 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.707975 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqj9" event={"ID":"2886ddaa-5c25-4044-8cbb-8a9249c32ee4","Type":"ContainerStarted","Data":"4f0f3d7e33de3c76345f1a5958ba24721dd9b07ce2a616b0b7addd4a7c55a027"} Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735323 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735416 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07d3d99e-014e-4924-827a-f3e2f87774c6-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszxk\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-kube-api-access-gszxk\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " 
pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735541 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735559 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735554 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766cdc564f-k86fp" event={"ID":"668a51ac-eb24-4148-8416-2c543cc983aa","Type":"ContainerStarted","Data":"87ced9610dc691595e52c355beeadbadfd71b41999481170014f30a1bec3c9e9"} Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.735576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07d3d99e-014e-4924-827a-f3e2f87774c6-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838524 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc 
kubenswrapper[4932]: I0321 09:16:52.838572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838611 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07d3d99e-014e-4924-827a-f3e2f87774c6-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszxk\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-kube-api-access-gszxk\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838672 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 
09:16:52.838693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07d3d99e-014e-4924-827a-f3e2f87774c6-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838760 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838779 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838800 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.838819 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.839043 4932 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.839992 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.840053 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.840807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.842033 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.842327 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07d3d99e-014e-4924-827a-f3e2f87774c6-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.850835 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07d3d99e-014e-4924-827a-f3e2f87774c6-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.855739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.855930 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.860099 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07d3d99e-014e-4924-827a-f3e2f87774c6-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.882826 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.888074 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszxk\" (UniqueName: \"kubernetes.io/projected/07d3d99e-014e-4924-827a-f3e2f87774c6-kube-api-access-gszxk\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0"
Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.926618 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"07d3d99e-014e-4924-827a-f3e2f87774c6\") " pod="openstack/notifications-rabbitmq-server-0"
Mar 21 09:16:52 crc kubenswrapper[4932]: I0321 09:16:52.964641 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.610498 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.691697 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"]
Mar 21 09:16:53 crc kubenswrapper[4932]: W0321 09:16:53.705923 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d3d99e_014e_4924_827a_f3e2f87774c6.slice/crio-a5137b31ed73e796f299417e03cd0d73bc32d2a10fed4490401bb23568134f61 WatchSource:0}: Error finding container a5137b31ed73e796f299417e03cd0d73bc32d2a10fed4490401bb23568134f61: Status 404 returned error can't find the container with id a5137b31ed73e796f299417e03cd0d73bc32d2a10fed4490401bb23568134f61
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.760687 4932 generic.go:334] "Generic (PLEG): container finished" podID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerID="8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd" exitCode=0
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.760776 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqj9" event={"ID":"2886ddaa-5c25-4044-8cbb-8a9249c32ee4","Type":"ContainerDied","Data":"8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd"}
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.770398 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52bf7d16-ddac-464e-aca0-7756f5a9f696","Type":"ContainerStarted","Data":"6c91230b52414a70efba9b872eac7caef3f5958ce287c058526946aeb65846de"}
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.789826 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"debe0e76-6d3a-402f-af21-a3ba7ceb5a24","Type":"ContainerStarted","Data":"2219c800d4522a25f131b4b10423cf927ef2f99aa5d6bb95b3f121d08b61547c"}
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.801968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"07d3d99e-014e-4924-827a-f3e2f87774c6","Type":"ContainerStarted","Data":"a5137b31ed73e796f299417e03cd0d73bc32d2a10fed4490401bb23568134f61"}
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.854153 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.863420 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.865736 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.866041 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.866238 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-t7zdc"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.866478 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.867438 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.902247 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.963865 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616e853e-7b43-435c-b3fd-beaaa89779ff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.963908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-config-data-default\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.963953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.963993 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.964027 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616e853e-7b43-435c-b3fd-beaaa89779ff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.964048 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sn8b\" (UniqueName: \"kubernetes.io/projected/616e853e-7b43-435c-b3fd-beaaa89779ff-kube-api-access-4sn8b\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.964071 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-kolla-config\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:53 crc kubenswrapper[4932]: I0321 09:16:53.964090 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616e853e-7b43-435c-b3fd-beaaa89779ff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065570 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616e853e-7b43-435c-b3fd-beaaa89779ff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065613 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sn8b\" (UniqueName: \"kubernetes.io/projected/616e853e-7b43-435c-b3fd-beaaa89779ff-kube-api-access-4sn8b\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065642 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-kolla-config\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616e853e-7b43-435c-b3fd-beaaa89779ff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065702 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616e853e-7b43-435c-b3fd-beaaa89779ff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065719 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-config-data-default\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065750 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.065786 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.066159 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.068253 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-kolla-config\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.069131 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-config-data-default\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.069660 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/616e853e-7b43-435c-b3fd-beaaa89779ff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.073821 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616e853e-7b43-435c-b3fd-beaaa89779ff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.089896 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/616e853e-7b43-435c-b3fd-beaaa89779ff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.099789 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/616e853e-7b43-435c-b3fd-beaaa89779ff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.127203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sn8b\" (UniqueName: \"kubernetes.io/projected/616e853e-7b43-435c-b3fd-beaaa89779ff-kube-api-access-4sn8b\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.187408 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"616e853e-7b43-435c-b3fd-beaaa89779ff\") " pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.217862 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.691129 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 21 09:16:54 crc kubenswrapper[4932]: W0321 09:16:54.726905 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616e853e_7b43_435c_b3fd_beaaa89779ff.slice/crio-4ab05e54cf9c5a776eea8e1a255efb786060bab4b425d8d449a1e10426e70882 WatchSource:0}: Error finding container 4ab05e54cf9c5a776eea8e1a255efb786060bab4b425d8d449a1e10426e70882: Status 404 returned error can't find the container with id 4ab05e54cf9c5a776eea8e1a255efb786060bab4b425d8d449a1e10426e70882
Mar 21 09:16:54 crc kubenswrapper[4932]: I0321 09:16:54.815893 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"616e853e-7b43-435c-b3fd-beaaa89779ff","Type":"ContainerStarted","Data":"4ab05e54cf9c5a776eea8e1a255efb786060bab4b425d8d449a1e10426e70882"}
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.013039 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.014632 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.018522 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.019570 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qm79v"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.019736 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.021655 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.027520 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195449 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195528 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195577 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195602 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195633 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195661 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgvj\" (UniqueName: \"kubernetes.io/projected/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-kube-api-access-glgvj\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.195697 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.301643 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.302337 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.302426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.302795 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.303023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgvj\" (UniqueName: \"kubernetes.io/projected/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-kube-api-access-glgvj\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.303229 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.303519 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.303564 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.304267 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.304515 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.305653 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.308833 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.311860 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.315298 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.324730 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glgvj\" (UniqueName: \"kubernetes.io/projected/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-kube-api-access-glgvj\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.363228 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.367856 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.368069 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba\") " pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.369310 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.372258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qpw4c"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.372335 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.372779 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.391253 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.506530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b1d914e-2d0b-4aa6-a863-04496a5acb61-kolla-config\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.506575 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b1d914e-2d0b-4aa6-a863-04496a5acb61-config-data\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.506632 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d914e-2d0b-4aa6-a863-04496a5acb61-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.506664 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1d914e-2d0b-4aa6-a863-04496a5acb61-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.506913 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ntf\" (UniqueName: \"kubernetes.io/projected/9b1d914e-2d0b-4aa6-a863-04496a5acb61-kube-api-access-s9ntf\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.608856 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b1d914e-2d0b-4aa6-a863-04496a5acb61-kolla-config\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.609262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b1d914e-2d0b-4aa6-a863-04496a5acb61-config-data\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.609370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d914e-2d0b-4aa6-a863-04496a5acb61-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.609419 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1d914e-2d0b-4aa6-a863-04496a5acb61-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.609477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ntf\" (UniqueName: \"kubernetes.io/projected/9b1d914e-2d0b-4aa6-a863-04496a5acb61-kube-api-access-s9ntf\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.609584 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9b1d914e-2d0b-4aa6-a863-04496a5acb61-kolla-config\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.611449 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b1d914e-2d0b-4aa6-a863-04496a5acb61-config-data\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.614051 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1d914e-2d0b-4aa6-a863-04496a5acb61-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.614086 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1d914e-2d0b-4aa6-a863-04496a5acb61-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.628049 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ntf\" (UniqueName: \"kubernetes.io/projected/9b1d914e-2d0b-4aa6-a863-04496a5acb61-kube-api-access-s9ntf\") pod \"memcached-0\" (UID: \"9b1d914e-2d0b-4aa6-a863-04496a5acb61\") " pod="openstack/memcached-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.644760 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 21 09:16:55 crc kubenswrapper[4932]: I0321 09:16:55.741800 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 21 09:16:57 crc kubenswrapper[4932]: I0321 09:16:57.986976 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 21 09:16:57 crc kubenswrapper[4932]: I0321 09:16:57.992664 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 21 09:16:57 crc kubenswrapper[4932]: I0321 09:16:57.996424 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-crv57"
Mar 21 09:16:58 crc kubenswrapper[4932]: I0321 09:16:58.034447 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 21 09:16:58 crc kubenswrapper[4932]: I0321 09:16:58.074402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4v62\" (UniqueName: \"kubernetes.io/projected/765d61b5-f144-4784-8c7d-ac497a6b6cba-kube-api-access-r4v62\") pod \"kube-state-metrics-0\" (UID: \"765d61b5-f144-4784-8c7d-ac497a6b6cba\") " pod="openstack/kube-state-metrics-0"
Mar 21 09:16:58 crc kubenswrapper[4932]: I0321 09:16:58.178722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4v62\" (UniqueName: \"kubernetes.io/projected/765d61b5-f144-4784-8c7d-ac497a6b6cba-kube-api-access-r4v62\") pod \"kube-state-metrics-0\" (UID: \"765d61b5-f144-4784-8c7d-ac497a6b6cba\") " pod="openstack/kube-state-metrics-0"
Mar 21 09:16:58 crc kubenswrapper[4932]: I0321 09:16:58.197764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4v62\" (UniqueName: \"kubernetes.io/projected/765d61b5-f144-4784-8c7d-ac497a6b6cba-kube-api-access-r4v62\") pod \"kube-state-metrics-0\" (UID: \"765d61b5-f144-4784-8c7d-ac497a6b6cba\") " pod="openstack/kube-state-metrics-0"
Mar 21 09:16:58 crc kubenswrapper[4932]: I0321 09:16:58.332146 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.625896 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.634598 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.638209 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.638443 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.638217 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.639372 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.641018 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.641368 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.641507 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.642522 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7sg4x"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.645227 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711672 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711718 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711735 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711768 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tf2\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-kube-api-access-74tf2\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711818 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711842 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0"
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321
09:16:59.711871 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec155900-6777-4362-8c9c-ea98a8e245a8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.711913 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813502 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813567 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813623 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc 
kubenswrapper[4932]: I0321 09:16:59.813678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tf2\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-kube-api-access-74tf2\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813708 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813749 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813781 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec155900-6777-4362-8c9c-ea98a8e245a8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813844 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " 
pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813884 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.813948 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.815403 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.819742 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.819759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.819915 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.820512 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.832523 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.832759 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2fe0a73783cbe795f5f78fb4762619a3b18dc91982a9d49dcd3d68ffc16f7f99/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.840857 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.850725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.853864 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec155900-6777-4362-8c9c-ea98a8e245a8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.856022 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tf2\" (UniqueName: 
\"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-kube-api-access-74tf2\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:16:59 crc kubenswrapper[4932]: I0321 09:16:59.938228 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:17:00 crc kubenswrapper[4932]: I0321 09:17:00.007226 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.185040 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.186926 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.189980 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zxl6k" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.190052 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.190229 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.192040 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.194308 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.202220 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357061 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357118 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357167 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-config\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357206 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp72d\" (UniqueName: \"kubernetes.io/projected/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-kube-api-access-pp72d\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.357322 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459473 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp72d\" (UniqueName: \"kubernetes.io/projected/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-kube-api-access-pp72d\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459639 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459766 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " 
pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459851 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.459892 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-config\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.460699 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.460820 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.464890 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-config\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.465106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.465378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.470063 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.475242 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.486634 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp72d\" (UniqueName: \"kubernetes.io/projected/ff5058f9-6f1d-412e-a4c1-c12b67a26b41-kube-api-access-pp72d\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " 
pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.494091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ff5058f9-6f1d-412e-a4c1-c12b67a26b41\") " pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.543213 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vk8zs"] Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.547128 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.550596 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.553066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.553270 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.555182 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-n5s5x" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.560161 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kdvp8"] Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.562549 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.576913 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vk8zs"] Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.578028 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kdvp8"] Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.665320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44hw\" (UniqueName: \"kubernetes.io/projected/9874749a-2839-4a08-bf7a-8e99d3c745a5-kube-api-access-g44hw\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.665396 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/86467dc0-186a-407d-b23b-5f1cc14a54ec-ovn-controller-tls-certs\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.665420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-lib\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.665441 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-run-ovn\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc 
kubenswrapper[4932]: I0321 09:17:01.665459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86467dc0-186a-407d-b23b-5f1cc14a54ec-scripts\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.665685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-etc-ovs\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.665761 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86467dc0-186a-407d-b23b-5f1cc14a54ec-combined-ca-bundle\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.666087 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-log-ovn\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.666179 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-run\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.666423 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-log\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.666492 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvsrq\" (UniqueName: \"kubernetes.io/projected/86467dc0-186a-407d-b23b-5f1cc14a54ec-kube-api-access-hvsrq\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.666519 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9874749a-2839-4a08-bf7a-8e99d3c745a5-scripts\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.666557 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-run\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768073 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g44hw\" (UniqueName: \"kubernetes.io/projected/9874749a-2839-4a08-bf7a-8e99d3c745a5-kube-api-access-g44hw\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768443 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/86467dc0-186a-407d-b23b-5f1cc14a54ec-ovn-controller-tls-certs\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-lib\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768499 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-run-ovn\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768518 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86467dc0-186a-407d-b23b-5f1cc14a54ec-scripts\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768570 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-etc-ovs\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768603 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86467dc0-186a-407d-b23b-5f1cc14a54ec-combined-ca-bundle\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768663 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-log-ovn\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768737 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-run\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768838 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-log\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvsrq\" (UniqueName: \"kubernetes.io/projected/86467dc0-186a-407d-b23b-5f1cc14a54ec-kube-api-access-hvsrq\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.768963 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9874749a-2839-4a08-bf7a-8e99d3c745a5-scripts\") pod \"ovn-controller-ovs-kdvp8\" (UID: 
\"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769002 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-run\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769214 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-log-ovn\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769324 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-run\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769363 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-run\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769372 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-etc-ovs\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769400 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-lib\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769522 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/86467dc0-186a-407d-b23b-5f1cc14a54ec-var-run-ovn\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.769683 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9874749a-2839-4a08-bf7a-8e99d3c745a5-var-log\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.771187 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9874749a-2839-4a08-bf7a-8e99d3c745a5-scripts\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.778423 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86467dc0-186a-407d-b23b-5f1cc14a54ec-combined-ca-bundle\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.778429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86467dc0-186a-407d-b23b-5f1cc14a54ec-scripts\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " 
pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.787340 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/86467dc0-186a-407d-b23b-5f1cc14a54ec-ovn-controller-tls-certs\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.788674 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g44hw\" (UniqueName: \"kubernetes.io/projected/9874749a-2839-4a08-bf7a-8e99d3c745a5-kube-api-access-g44hw\") pod \"ovn-controller-ovs-kdvp8\" (UID: \"9874749a-2839-4a08-bf7a-8e99d3c745a5\") " pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.789074 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvsrq\" (UniqueName: \"kubernetes.io/projected/86467dc0-186a-407d-b23b-5f1cc14a54ec-kube-api-access-hvsrq\") pod \"ovn-controller-vk8zs\" (UID: \"86467dc0-186a-407d-b23b-5f1cc14a54ec\") " pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.898840 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vk8zs" Mar 21 09:17:01 crc kubenswrapper[4932]: I0321 09:17:01.912178 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.732388 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.734080 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.736298 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.736431 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xsx97" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.739446 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.739875 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.750275 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.822754 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eeca80df-848b-4833-96c7-f4e57ad330f7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.822842 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.822877 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eeca80df-848b-4833-96c7-f4e57ad330f7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " 
pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.822908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.822933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrc55\" (UniqueName: \"kubernetes.io/projected/eeca80df-848b-4833-96c7-f4e57ad330f7-kube-api-access-mrc55\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.823023 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.823074 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeca80df-848b-4833-96c7-f4e57ad330f7-config\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.823094 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 
09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.924916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.924978 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeca80df-848b-4833-96c7-f4e57ad330f7-config\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.925001 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.925032 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eeca80df-848b-4833-96c7-f4e57ad330f7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.925098 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.925122 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/eeca80df-848b-4833-96c7-f4e57ad330f7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.925143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.925159 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrc55\" (UniqueName: \"kubernetes.io/projected/eeca80df-848b-4833-96c7-f4e57ad330f7-kube-api-access-mrc55\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.926011 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeca80df-848b-4833-96c7-f4e57ad330f7-config\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.926075 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eeca80df-848b-4833-96c7-f4e57ad330f7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.926188 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") device mount path \"/mnt/openstack/pv06\"" 
pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.926444 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eeca80df-848b-4833-96c7-f4e57ad330f7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.932021 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.932024 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.936091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeca80df-848b-4833-96c7-f4e57ad330f7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.943329 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrc55\" (UniqueName: \"kubernetes.io/projected/eeca80df-848b-4833-96c7-f4e57ad330f7-kube-api-access-mrc55\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:04.946476 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"eeca80df-848b-4833-96c7-f4e57ad330f7\") " pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:06 crc kubenswrapper[4932]: I0321 09:17:05.063140 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:16 crc kubenswrapper[4932]: E0321 09:17:16.405766 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Mar 21 09:17:16 crc kubenswrapper[4932]: E0321 09:17:16.406189 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Mar 21 09:17:16 crc kubenswrapper[4932]: E0321 09:17:16.406415 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.159:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5hts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(debe0e76-6d3a-402f-af21-a3ba7ceb5a24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:17:16 crc 
kubenswrapper[4932]: E0321 09:17:16.407635 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="debe0e76-6d3a-402f-af21-a3ba7ceb5a24" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.130723 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.130804 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.130950 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kncwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6cf7b9b6b9-f5lfz_openstack(e83cf6b3-c87b-4e81-aff6-4ec5151c5692): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.131334 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.131425 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.132073 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77hb9hddhdfhf5h5cch698h578h5f8h675h5c5hdch97h5bch59bh5b6h55h5bch556hb5h599h8dhc8h667h59ch659h578hcfh5c7h9dh645h554q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5t76t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-
log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5fdbdbc8cc-z8kx2_openstack(5023c0a6-e1c9-424f-b72b-b3da9afd59ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.132678 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" podUID="e83cf6b3-c87b-4e81-aff6-4ec5151c5692" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.133589 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" podUID="5023c0a6-e1c9-424f-b72b-b3da9afd59ce" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.170069 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.170492 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.170607 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bh2c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-766cdc564f-k86fp_openstack(668a51ac-eb24-4148-8416-2c543cc983aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.172038 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-766cdc564f-k86fp" podUID="668a51ac-eb24-4148-8416-2c543cc983aa" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.177607 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.177665 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.177782 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4d76j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bc4895fdf-tlpfr_openstack(8aa503b9-d693-4732-9c87-86ac0692fc90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.179135 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" podUID="8aa503b9-d693-4732-9c87-86ac0692fc90" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.244804 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.244856 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.244961 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhs5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f48d6b889-khmhg_openstack(a41cc7c9-0b0e-4f5d-823d-57868a845006): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:17:17 crc kubenswrapper[4932]: E0321 09:17:17.246283 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" podUID="a41cc7c9-0b0e-4f5d-823d-57868a845006" Mar 21 09:17:17 crc kubenswrapper[4932]: I0321 09:17:17.538395 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:17:17 crc kubenswrapper[4932]: I0321 09:17:17.788116 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 21 09:17:17 crc kubenswrapper[4932]: I0321 09:17:17.811319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 21 09:17:17 crc kubenswrapper[4932]: I0321 09:17:17.822114 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:17:17 crc kubenswrapper[4932]: I0321 09:17:17.940984 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vk8zs"] Mar 21 09:17:18 crc kubenswrapper[4932]: W0321 09:17:18.039622 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec155900_6777_4362_8c9c_ea98a8e245a8.slice/crio-ad11e0063b0cbffb4e02a9bbd237b5df2d971ebeb3471998749a75a995054f16 WatchSource:0}: Error finding container ad11e0063b0cbffb4e02a9bbd237b5df2d971ebeb3471998749a75a995054f16: Status 404 returned error can't find the container with id ad11e0063b0cbffb4e02a9bbd237b5df2d971ebeb3471998749a75a995054f16 Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.053553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerStarted","Data":"ad11e0063b0cbffb4e02a9bbd237b5df2d971ebeb3471998749a75a995054f16"} Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.055550 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/memcached-0" event={"ID":"9b1d914e-2d0b-4aa6-a863-04496a5acb61","Type":"ContainerStarted","Data":"84dd48c3189d5c2e29f3d293a5a82d442d161284d27086fca3a8699011399abb"} Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.057462 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba","Type":"ContainerStarted","Data":"f3215717ef20a3726091d6bfc950ede2070ae434b1e8d95552e5753328208967"} Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.059799 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"616e853e-7b43-435c-b3fd-beaaa89779ff","Type":"ContainerStarted","Data":"05113e8928d4b7f39d1444dfd14b9a1a34ceed2d15f6f0b04ff4589ee892bbbb"} Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.069994 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"765d61b5-f144-4784-8c7d-ac497a6b6cba","Type":"ContainerStarted","Data":"ece26c60f4bad48cbeaa916023b6fb184e96adba19ab33e128a42f1052097709"} Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.071529 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 21 09:17:18 crc kubenswrapper[4932]: E0321 09:17:18.074596 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" podUID="5023c0a6-e1c9-424f-b72b-b3da9afd59ce" Mar 21 09:17:18 crc kubenswrapper[4932]: E0321 09:17:18.074711 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" 
pod="openstack/dnsmasq-dns-766cdc564f-k86fp" podUID="668a51ac-eb24-4148-8416-2c543cc983aa" Mar 21 09:17:18 crc kubenswrapper[4932]: W0321 09:17:18.240573 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeca80df_848b_4833_96c7_f4e57ad330f7.slice/crio-66f7c0a742b1fde68ed361326eb856c28c85971a8e1d9700fe403f3a83654560 WatchSource:0}: Error finding container 66f7c0a742b1fde68ed361326eb856c28c85971a8e1d9700fe403f3a83654560: Status 404 returned error can't find the container with id 66f7c0a742b1fde68ed361326eb856c28c85971a8e1d9700fe403f3a83654560 Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.829541 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.836759 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.856388 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.871218 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-config\") pod \"a41cc7c9-0b0e-4f5d-823d-57868a845006\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.871327 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kncwz\" (UniqueName: \"kubernetes.io/projected/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-kube-api-access-kncwz\") pod \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.871379 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-config\") pod \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\" (UID: \"e83cf6b3-c87b-4e81-aff6-4ec5151c5692\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.871464 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-dns-svc\") pod \"a41cc7c9-0b0e-4f5d-823d-57868a845006\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.871654 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhs5h\" (UniqueName: \"kubernetes.io/projected/a41cc7c9-0b0e-4f5d-823d-57868a845006-kube-api-access-jhs5h\") pod \"a41cc7c9-0b0e-4f5d-823d-57868a845006\" (UID: \"a41cc7c9-0b0e-4f5d-823d-57868a845006\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.872272 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-config" (OuterVolumeSpecName: "config") pod "a41cc7c9-0b0e-4f5d-823d-57868a845006" (UID: "a41cc7c9-0b0e-4f5d-823d-57868a845006"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.872423 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a41cc7c9-0b0e-4f5d-823d-57868a845006" (UID: "a41cc7c9-0b0e-4f5d-823d-57868a845006"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.872986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-config" (OuterVolumeSpecName: "config") pod "e83cf6b3-c87b-4e81-aff6-4ec5151c5692" (UID: "e83cf6b3-c87b-4e81-aff6-4ec5151c5692"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.879503 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-kube-api-access-kncwz" (OuterVolumeSpecName: "kube-api-access-kncwz") pod "e83cf6b3-c87b-4e81-aff6-4ec5151c5692" (UID: "e83cf6b3-c87b-4e81-aff6-4ec5151c5692"). InnerVolumeSpecName "kube-api-access-kncwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.880571 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41cc7c9-0b0e-4f5d-823d-57868a845006-kube-api-access-jhs5h" (OuterVolumeSpecName: "kube-api-access-jhs5h") pod "a41cc7c9-0b0e-4f5d-823d-57868a845006" (UID: "a41cc7c9-0b0e-4f5d-823d-57868a845006"). InnerVolumeSpecName "kube-api-access-jhs5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.957314 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kdvp8"] Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.973391 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-dns-svc\") pod \"8aa503b9-d693-4732-9c87-86ac0692fc90\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.973570 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-config\") pod \"8aa503b9-d693-4732-9c87-86ac0692fc90\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.973707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d76j\" (UniqueName: \"kubernetes.io/projected/8aa503b9-d693-4732-9c87-86ac0692fc90-kube-api-access-4d76j\") pod \"8aa503b9-d693-4732-9c87-86ac0692fc90\" (UID: \"8aa503b9-d693-4732-9c87-86ac0692fc90\") " Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.973912 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8aa503b9-d693-4732-9c87-86ac0692fc90" (UID: "8aa503b9-d693-4732-9c87-86ac0692fc90"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.974918 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.974941 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.974956 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kncwz\" (UniqueName: \"kubernetes.io/projected/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-kube-api-access-kncwz\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.974971 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cf6b3-c87b-4e81-aff6-4ec5151c5692-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.974982 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a41cc7c9-0b0e-4f5d-823d-57868a845006-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.974994 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhs5h\" (UniqueName: \"kubernetes.io/projected/a41cc7c9-0b0e-4f5d-823d-57868a845006-kube-api-access-jhs5h\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.975120 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-config" (OuterVolumeSpecName: "config") pod "8aa503b9-d693-4732-9c87-86ac0692fc90" (UID: "8aa503b9-d693-4732-9c87-86ac0692fc90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:18 crc kubenswrapper[4932]: I0321 09:17:18.978046 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa503b9-d693-4732-9c87-86ac0692fc90-kube-api-access-4d76j" (OuterVolumeSpecName: "kube-api-access-4d76j") pod "8aa503b9-d693-4732-9c87-86ac0692fc90" (UID: "8aa503b9-d693-4732-9c87-86ac0692fc90"). InnerVolumeSpecName "kube-api-access-4d76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.076198 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aa503b9-d693-4732-9c87-86ac0692fc90-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.076267 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d76j\" (UniqueName: \"kubernetes.io/projected/8aa503b9-d693-4732-9c87-86ac0692fc90-kube-api-access-4d76j\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.077753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.079858 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.079919 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f48d6b889-khmhg" event={"ID":"a41cc7c9-0b0e-4f5d-823d-57868a845006","Type":"ContainerDied","Data":"9b34ca8855423e0c7bec6ec1b09d9af3cf49461b3d97e99605b57713fbbf3d21"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.082763 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52bf7d16-ddac-464e-aca0-7756f5a9f696","Type":"ContainerStarted","Data":"0f8f2d414c512b789fa5243857d9b5371289822c973afa9643e18f0117903294"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.090438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vk8zs" event={"ID":"86467dc0-186a-407d-b23b-5f1cc14a54ec","Type":"ContainerStarted","Data":"c93c1e64b3e21b0db31678925304cf302c15446ef507c8074c9ca6bfe07973b9"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.092615 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eeca80df-848b-4833-96c7-f4e57ad330f7","Type":"ContainerStarted","Data":"66f7c0a742b1fde68ed361326eb856c28c85971a8e1d9700fe403f3a83654560"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.094917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"07d3d99e-014e-4924-827a-f3e2f87774c6","Type":"ContainerStarted","Data":"758f8c37174f2caf5319c2f9f0bb18cfba6d9aa67126a7700441d9d09e670ddb"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.097972 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" event={"ID":"8aa503b9-d693-4732-9c87-86ac0692fc90","Type":"ContainerDied","Data":"10ae35fa6ff69fe09ebe8b47294d64044dabfe6146ae359224f7b3026d406700"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.098050 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bc4895fdf-tlpfr" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.101398 4932 generic.go:334] "Generic (PLEG): container finished" podID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerID="51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a" exitCode=0 Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.101469 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqj9" event={"ID":"2886ddaa-5c25-4044-8cbb-8a9249c32ee4","Type":"ContainerDied","Data":"51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.104393 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"debe0e76-6d3a-402f-af21-a3ba7ceb5a24","Type":"ContainerStarted","Data":"58f7bb97afe0e71eb26cea8d832a78b830bc6b66828c9fa1109d9223ffd96114"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.105762 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba","Type":"ContainerStarted","Data":"8548e5f5d741178086a5f625f2826f00cbfd4d07b22f97dda04e368287546580"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.118380 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" event={"ID":"e83cf6b3-c87b-4e81-aff6-4ec5151c5692","Type":"ContainerDied","Data":"b365e0757cd5853dcaa04ab7be66beabad26846a71937621847a6c197e276d60"} Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.118378 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.234625 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-khmhg"] Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.247341 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f48d6b889-khmhg"] Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.315921 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz"] Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.330194 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf7b9b6b9-f5lfz"] Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.353621 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bc4895fdf-tlpfr"] Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.360651 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bc4895fdf-tlpfr"] Mar 21 09:17:19 crc kubenswrapper[4932]: W0321 09:17:19.715711 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5058f9_6f1d_412e_a4c1_c12b67a26b41.slice/crio-058f4fa9b0ebbe6499361f0c39edb1915a60bb8a1e3bed464db1b25009c25c7e WatchSource:0}: Error finding container 058f4fa9b0ebbe6499361f0c39edb1915a60bb8a1e3bed464db1b25009c25c7e: Status 404 returned error can't find the container with id 058f4fa9b0ebbe6499361f0c39edb1915a60bb8a1e3bed464db1b25009c25c7e Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.732958 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aa503b9-d693-4732-9c87-86ac0692fc90" path="/var/lib/kubelet/pods/8aa503b9-d693-4732-9c87-86ac0692fc90/volumes" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.734095 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a41cc7c9-0b0e-4f5d-823d-57868a845006" path="/var/lib/kubelet/pods/a41cc7c9-0b0e-4f5d-823d-57868a845006/volumes" Mar 21 09:17:19 crc kubenswrapper[4932]: I0321 09:17:19.735309 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83cf6b3-c87b-4e81-aff6-4ec5151c5692" path="/var/lib/kubelet/pods/e83cf6b3-c87b-4e81-aff6-4ec5151c5692/volumes" Mar 21 09:17:20 crc kubenswrapper[4932]: I0321 09:17:20.127058 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdvp8" event={"ID":"9874749a-2839-4a08-bf7a-8e99d3c745a5","Type":"ContainerStarted","Data":"df434601667ab8b884957d3a20cbf186d065cb242263d6d3a5591ac11b2b3fb3"} Mar 21 09:17:20 crc kubenswrapper[4932]: I0321 09:17:20.128981 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ff5058f9-6f1d-412e-a4c1-c12b67a26b41","Type":"ContainerStarted","Data":"058f4fa9b0ebbe6499361f0c39edb1915a60bb8a1e3bed464db1b25009c25c7e"} Mar 21 09:17:22 crc kubenswrapper[4932]: E0321 09:17:22.787628 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616e853e_7b43_435c_b3fd_beaaa89779ff.slice/crio-05113e8928d4b7f39d1444dfd14b9a1a34ceed2d15f6f0b04ff4589ee892bbbb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod616e853e_7b43_435c_b3fd_beaaa89779ff.slice/crio-conmon-05113e8928d4b7f39d1444dfd14b9a1a34ceed2d15f6f0b04ff4589ee892bbbb.scope\": RecentStats: unable to find data in memory cache]" Mar 21 09:17:23 crc kubenswrapper[4932]: I0321 09:17:23.158241 4932 generic.go:334] "Generic (PLEG): container finished" podID="616e853e-7b43-435c-b3fd-beaaa89779ff" containerID="05113e8928d4b7f39d1444dfd14b9a1a34ceed2d15f6f0b04ff4589ee892bbbb" exitCode=0 Mar 21 09:17:23 crc kubenswrapper[4932]: I0321 09:17:23.158286 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"616e853e-7b43-435c-b3fd-beaaa89779ff","Type":"ContainerDied","Data":"05113e8928d4b7f39d1444dfd14b9a1a34ceed2d15f6f0b04ff4589ee892bbbb"} Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.178160 4932 generic.go:334] "Generic (PLEG): container finished" podID="bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba" containerID="8548e5f5d741178086a5f625f2826f00cbfd4d07b22f97dda04e368287546580" exitCode=0 Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.178512 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba","Type":"ContainerDied","Data":"8548e5f5d741178086a5f625f2826f00cbfd4d07b22f97dda04e368287546580"} Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.961627 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-w54m5"] Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.963361 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.965738 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.978023 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfdfb\" (UniqueName: \"kubernetes.io/projected/5faa8451-6af5-4eea-ba81-732ddabb83b3-kube-api-access-lfdfb\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.978076 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5faa8451-6af5-4eea-ba81-732ddabb83b3-ovn-rundir\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.978249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faa8451-6af5-4eea-ba81-732ddabb83b3-combined-ca-bundle\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.978305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5faa8451-6af5-4eea-ba81-732ddabb83b3-config\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.978446 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5faa8451-6af5-4eea-ba81-732ddabb83b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.978519 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5faa8451-6af5-4eea-ba81-732ddabb83b3-ovs-rundir\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:24 crc kubenswrapper[4932]: I0321 09:17:24.984451 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w54m5"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.079469 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5faa8451-6af5-4eea-ba81-732ddabb83b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.079550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5faa8451-6af5-4eea-ba81-732ddabb83b3-ovs-rundir\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.079591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdfb\" (UniqueName: \"kubernetes.io/projected/5faa8451-6af5-4eea-ba81-732ddabb83b3-kube-api-access-lfdfb\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.079622 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5faa8451-6af5-4eea-ba81-732ddabb83b3-ovn-rundir\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.079646 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faa8451-6af5-4eea-ba81-732ddabb83b3-combined-ca-bundle\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.079693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5faa8451-6af5-4eea-ba81-732ddabb83b3-config\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.080411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5faa8451-6af5-4eea-ba81-732ddabb83b3-config\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.080638 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5faa8451-6af5-4eea-ba81-732ddabb83b3-ovs-rundir\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.080689 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5faa8451-6af5-4eea-ba81-732ddabb83b3-ovn-rundir\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.092198 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5faa8451-6af5-4eea-ba81-732ddabb83b3-combined-ca-bundle\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.103401 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5faa8451-6af5-4eea-ba81-732ddabb83b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.105750 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766cdc564f-k86fp"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.112635 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdfb\" (UniqueName: \"kubernetes.io/projected/5faa8451-6af5-4eea-ba81-732ddabb83b3-kube-api-access-lfdfb\") pod \"ovn-controller-metrics-w54m5\" (UID: \"5faa8451-6af5-4eea-ba81-732ddabb83b3\") " pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.158669 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-ml8g9"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.160411 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.165654 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.171335 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-ml8g9"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.185919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-ovsdbserver-nb\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.185988 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-dns-svc\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.186015 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5mh\" (UniqueName: \"kubernetes.io/projected/c5e5ca63-527b-4907-9e46-e11690797d6f-kube-api-access-td5mh\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.186049 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-config\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.199159 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vk8zs" event={"ID":"86467dc0-186a-407d-b23b-5f1cc14a54ec","Type":"ContainerStarted","Data":"ab096b9ee556b8f445946f16f4600ab433c53d0722e23c062407379e3e6df5a9"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.204671 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vk8zs"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.225413 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"616e853e-7b43-435c-b3fd-beaaa89779ff","Type":"ContainerStarted","Data":"3cfa8239703e06929e9dd1f5aca9b1cea5d5c5257ee22bf2d64f39da2510d51a"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.241283 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vk8zs" podStartSLOduration=19.651824301 podStartE2EDuration="24.241257027s" podCreationTimestamp="2026-03-21 09:17:01 +0000 UTC" firstStartedPulling="2026-03-21 09:17:18.141576274 +0000 UTC m=+1141.736774543" lastFinishedPulling="2026-03-21 09:17:22.731009 +0000 UTC m=+1146.326207269" observedRunningTime="2026-03-21 09:17:25.233498706 +0000 UTC m=+1148.828696995" watchObservedRunningTime="2026-03-21 09:17:25.241257027 +0000 UTC m=+1148.836455286"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.252284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"765d61b5-f144-4784-8c7d-ac497a6b6cba","Type":"ContainerStarted","Data":"535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.253191 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.273209 4932 generic.go:334] "Generic (PLEG): container finished" podID="9874749a-2839-4a08-bf7a-8e99d3c745a5" containerID="e87eaab408f06007a83e46422867bb2e2fdfe3863dac98cc55569b5b78fcf8f9" exitCode=0
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.273336 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdvp8" event={"ID":"9874749a-2839-4a08-bf7a-8e99d3c745a5","Type":"ContainerDied","Data":"e87eaab408f06007a83e46422867bb2e2fdfe3863dac98cc55569b5b78fcf8f9"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.275670 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.961380446 podStartE2EDuration="33.275650669s" podCreationTimestamp="2026-03-21 09:16:52 +0000 UTC" firstStartedPulling="2026-03-21 09:16:54.743983374 +0000 UTC m=+1118.339181643" lastFinishedPulling="2026-03-21 09:17:17.058253597 +0000 UTC m=+1140.653451866" observedRunningTime="2026-03-21 09:17:25.27308564 +0000 UTC m=+1148.868283929" watchObservedRunningTime="2026-03-21 09:17:25.275650669 +0000 UTC m=+1148.870848938"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.280294 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w54m5"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.289105 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-dns-svc\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.289395 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td5mh\" (UniqueName: \"kubernetes.io/projected/c5e5ca63-527b-4907-9e46-e11690797d6f-kube-api-access-td5mh\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.289508 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-config\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.289513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9b1d914e-2d0b-4aa6-a863-04496a5acb61","Type":"ContainerStarted","Data":"8a09d9e844041575ff8e5705ee5a119913f3283c551f94383cb46c9bdef44afb"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.289726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-ovsdbserver-nb\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.290196 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.290673 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-config\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.290961 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-dns-svc\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.292603 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-ovsdbserver-nb\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.306069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.097574115 podStartE2EDuration="28.306044919s" podCreationTimestamp="2026-03-21 09:16:57 +0000 UTC" firstStartedPulling="2026-03-21 09:17:17.551026119 +0000 UTC m=+1141.146224398" lastFinishedPulling="2026-03-21 09:17:23.759496933 +0000 UTC m=+1147.354695202" observedRunningTime="2026-03-21 09:17:25.295564455 +0000 UTC m=+1148.890762744" watchObservedRunningTime="2026-03-21 09:17:25.306044919 +0000 UTC m=+1148.901243208"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.310666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqj9" event={"ID":"2886ddaa-5c25-4044-8cbb-8a9249c32ee4","Type":"ContainerStarted","Data":"0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.316407 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td5mh\" (UniqueName: \"kubernetes.io/projected/c5e5ca63-527b-4907-9e46-e11690797d6f-kube-api-access-td5mh\") pod \"dnsmasq-dns-58bdb65675-ml8g9\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") " pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.320074 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eeca80df-848b-4833-96c7-f4e57ad330f7","Type":"ContainerStarted","Data":"5cb50591b224c749cdf34ef360a0a61e65dcaab5bb0893333000adea79d6de23"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.323234 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba","Type":"ContainerStarted","Data":"822e0b06063254503c908a315a6f99a5f63b40c9bd65ba6e53d93ea0e2874ec9"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.339544 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ff5058f9-6f1d-412e-a4c1-c12b67a26b41","Type":"ContainerStarted","Data":"92a3953bbd4d52472831dddf249bff6899965094d8e4d3774304c9a69d3923f4"}
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.365050 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=26.554482491999998 podStartE2EDuration="30.365034352s" podCreationTimestamp="2026-03-21 09:16:55 +0000 UTC" firstStartedPulling="2026-03-21 09:17:17.938170146 +0000 UTC m=+1141.533368415" lastFinishedPulling="2026-03-21 09:17:21.748722006 +0000 UTC m=+1145.343920275" observedRunningTime="2026-03-21 09:17:25.358805241 +0000 UTC m=+1148.954003530" watchObservedRunningTime="2026-03-21 09:17:25.365034352 +0000 UTC m=+1148.960232621"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.391198 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=32.391176181 podStartE2EDuration="32.391176181s" podCreationTimestamp="2026-03-21 09:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:17:25.383085401 +0000 UTC m=+1148.978283680" watchObservedRunningTime="2026-03-21 09:17:25.391176181 +0000 UTC m=+1148.986374460"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.428708 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbqj9" podStartSLOduration=5.473774235 podStartE2EDuration="34.428690031s" podCreationTimestamp="2026-03-21 09:16:51 +0000 UTC" firstStartedPulling="2026-03-21 09:16:53.764219528 +0000 UTC m=+1117.359417797" lastFinishedPulling="2026-03-21 09:17:22.719135324 +0000 UTC m=+1146.314333593" observedRunningTime="2026-03-21 09:17:25.415977067 +0000 UTC m=+1149.011175346" watchObservedRunningTime="2026-03-21 09:17:25.428690031 +0000 UTC m=+1149.023888300"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.463912 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.520624 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-dvmxq"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.522954 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.528506 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.529022 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.558675 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-dvmxq"]
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.645275 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.645365 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.674789 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766cdc564f-k86fp"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.697719 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-dns-svc\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.697803 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-config\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.697835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.697917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbqg\" (UniqueName: \"kubernetes.io/projected/67d7398f-51b2-4776-bd9a-936ba72c2d6e-kube-api-access-nmbqg\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.697945 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.799436 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh2c4\" (UniqueName: \"kubernetes.io/projected/668a51ac-eb24-4148-8416-2c543cc983aa-kube-api-access-bh2c4\") pod \"668a51ac-eb24-4148-8416-2c543cc983aa\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") "
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.799497 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-config\") pod \"668a51ac-eb24-4148-8416-2c543cc983aa\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") "
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.799697 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-dns-svc\") pod \"668a51ac-eb24-4148-8416-2c543cc983aa\" (UID: \"668a51ac-eb24-4148-8416-2c543cc983aa\") "
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.799879 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbqg\" (UniqueName: \"kubernetes.io/projected/67d7398f-51b2-4776-bd9a-936ba72c2d6e-kube-api-access-nmbqg\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.799911 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.799960 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-dns-svc\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.800000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-config\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.800030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.801881 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-config" (OuterVolumeSpecName: "config") pod "668a51ac-eb24-4148-8416-2c543cc983aa" (UID: "668a51ac-eb24-4148-8416-2c543cc983aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.802694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.804017 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-config\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.804034 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "668a51ac-eb24-4148-8416-2c543cc983aa" (UID: "668a51ac-eb24-4148-8416-2c543cc983aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.804883 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-dns-svc\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.809210 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.824588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbqg\" (UniqueName: \"kubernetes.io/projected/67d7398f-51b2-4776-bd9a-936ba72c2d6e-kube-api-access-nmbqg\") pod \"dnsmasq-dns-6f8ff78869-dvmxq\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.828896 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668a51ac-eb24-4148-8416-2c543cc983aa-kube-api-access-bh2c4" (OuterVolumeSpecName: "kube-api-access-bh2c4") pod "668a51ac-eb24-4148-8416-2c543cc983aa" (UID: "668a51ac-eb24-4148-8416-2c543cc983aa"). InnerVolumeSpecName "kube-api-access-bh2c4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.868591 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq"
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.903611 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.903668 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh2c4\" (UniqueName: \"kubernetes.io/projected/668a51ac-eb24-4148-8416-2c543cc983aa-kube-api-access-bh2c4\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.903680 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668a51ac-eb24-4148-8416-2c543cc983aa-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:25 crc kubenswrapper[4932]: I0321 09:17:25.912672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w54m5"]
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.056470 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.169030 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-ml8g9"]
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.211812 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t76t\" (UniqueName: \"kubernetes.io/projected/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-kube-api-access-5t76t\") pod \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") "
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.211926 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-dns-svc\") pod \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") "
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.212250 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-config\") pod \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\" (UID: \"5023c0a6-e1c9-424f-b72b-b3da9afd59ce\") "
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.212714 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5023c0a6-e1c9-424f-b72b-b3da9afd59ce" (UID: "5023c0a6-e1c9-424f-b72b-b3da9afd59ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.212802 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-config" (OuterVolumeSpecName: "config") pod "5023c0a6-e1c9-424f-b72b-b3da9afd59ce" (UID: "5023c0a6-e1c9-424f-b72b-b3da9afd59ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.228757 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-kube-api-access-5t76t" (OuterVolumeSpecName: "kube-api-access-5t76t") pod "5023c0a6-e1c9-424f-b72b-b3da9afd59ce" (UID: "5023c0a6-e1c9-424f-b72b-b3da9afd59ce"). InnerVolumeSpecName "kube-api-access-5t76t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.314022 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.314332 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.314363 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t76t\" (UniqueName: \"kubernetes.io/projected/5023c0a6-e1c9-424f-b72b-b3da9afd59ce-kube-api-access-5t76t\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.348516 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdvp8" event={"ID":"9874749a-2839-4a08-bf7a-8e99d3c745a5","Type":"ContainerStarted","Data":"e0dc57bb72b0fe39418b116b467c08f410c3651a50cea7aeb68ac42902ef737b"}
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.350218 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-766cdc564f-k86fp"
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.350235 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-766cdc564f-k86fp" event={"ID":"668a51ac-eb24-4148-8416-2c543cc983aa","Type":"ContainerDied","Data":"87ced9610dc691595e52c355beeadbadfd71b41999481170014f30a1bec3c9e9"}
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.352314 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2" event={"ID":"5023c0a6-e1c9-424f-b72b-b3da9afd59ce","Type":"ContainerDied","Data":"0ea4ec4af29e02eb7720f8946261f215287a4ff0ec893892bdc832d2c5136da0"}
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.352402 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.355307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" event={"ID":"c5e5ca63-527b-4907-9e46-e11690797d6f","Type":"ContainerStarted","Data":"9097155ddeabfa564fe5926d052931cc968a8fd8a0151356a52be4c728622c83"}
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.359055 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w54m5" event={"ID":"5faa8451-6af5-4eea-ba81-732ddabb83b3","Type":"ContainerStarted","Data":"8c8c8d57f93854e6ca6a1f7c8be3ceb56d6c437470788046b27ab9ef0e1a23dd"}
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.418316 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-766cdc564f-k86fp"]
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.433980 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-766cdc564f-k86fp"]
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.446892 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"]
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.452817 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbdbc8cc-z8kx2"]
Mar 21 09:17:26 crc kubenswrapper[4932]: I0321 09:17:26.459764 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-dvmxq"]
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.366892 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kdvp8" event={"ID":"9874749a-2839-4a08-bf7a-8e99d3c745a5","Type":"ContainerStarted","Data":"2dbd2e5e994d3ee819bfcef16ba2971b61fc84351c3b1a363df3b189719960a2"}
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.368261 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kdvp8"
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.368293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kdvp8"
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.370235 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerStarted","Data":"b3f7a9ebf09c962bfc77cd713a69c5b6eab7ea38596e4511c70ad60d57541524"}
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.372133 4932 generic.go:334] "Generic (PLEG): container finished" podID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerID="5546b55ae8ccdf4050a92998ec00e225cdf096a62fd7a15092da87cad675c5a7" exitCode=0
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.372177 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" event={"ID":"c5e5ca63-527b-4907-9e46-e11690797d6f","Type":"ContainerDied","Data":"5546b55ae8ccdf4050a92998ec00e225cdf096a62fd7a15092da87cad675c5a7"}
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.374507 4932 generic.go:334] "Generic (PLEG): container finished" podID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerID="4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15" exitCode=0
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.375525 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" event={"ID":"67d7398f-51b2-4776-bd9a-936ba72c2d6e","Type":"ContainerDied","Data":"4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15"}
Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.375552 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" event={"ID":"67d7398f-51b2-4776-bd9a-936ba72c2d6e","Type":"ContainerStarted","Data":"1d1b9d49dd2c53012b029be9c6eace4e5fc96df865f7061069f7227dde0ae5cc"}
Mar 21 09:17:27 crc
kubenswrapper[4932]: I0321 09:17:27.400580 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kdvp8" podStartSLOduration=23.415429821 podStartE2EDuration="26.400556575s" podCreationTimestamp="2026-03-21 09:17:01 +0000 UTC" firstStartedPulling="2026-03-21 09:17:19.734497564 +0000 UTC m=+1143.329695833" lastFinishedPulling="2026-03-21 09:17:22.719624318 +0000 UTC m=+1146.314822587" observedRunningTime="2026-03-21 09:17:27.392119314 +0000 UTC m=+1150.987317603" watchObservedRunningTime="2026-03-21 09:17:27.400556575 +0000 UTC m=+1150.995754844" Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.718274 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5023c0a6-e1c9-424f-b72b-b3da9afd59ce" path="/var/lib/kubelet/pods/5023c0a6-e1c9-424f-b72b-b3da9afd59ce/volumes" Mar 21 09:17:27 crc kubenswrapper[4932]: I0321 09:17:27.718814 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668a51ac-eb24-4148-8416-2c543cc983aa" path="/var/lib/kubelet/pods/668a51ac-eb24-4148-8416-2c543cc983aa/volumes" Mar 21 09:17:28 crc kubenswrapper[4932]: E0321 09:17:28.060246 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.20:53976->38.102.83.20:39993: write tcp 38.102.83.20:53976->38.102.83.20:39993: write: broken pipe Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.398789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" event={"ID":"c5e5ca63-527b-4907-9e46-e11690797d6f","Type":"ContainerStarted","Data":"aeb89cb1704e5221248de580a11f4bf399dc42f33c4506f75c5a0edb312d35e4"} Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.399274 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.400155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-w54m5" event={"ID":"5faa8451-6af5-4eea-ba81-732ddabb83b3","Type":"ContainerStarted","Data":"d973583cdf1057c362e049c599e51858e29a9c4c1c51f7945513c4e59ca36c04"} Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.402992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" event={"ID":"67d7398f-51b2-4776-bd9a-936ba72c2d6e","Type":"ContainerStarted","Data":"d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c"} Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.403129 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.406376 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ff5058f9-6f1d-412e-a4c1-c12b67a26b41","Type":"ContainerStarted","Data":"cfdad1e65acb2656787647c72bf188b7232dc9ac8fa065aa087cba5073f2853a"} Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.408450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eeca80df-848b-4833-96c7-f4e57ad330f7","Type":"ContainerStarted","Data":"9c477f7d4adc4b19dad264f38e137624ce54163a70a329d1dd707de4872f05a1"} Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.420901 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" podStartSLOduration=5.320879176 podStartE2EDuration="5.420884258s" podCreationTimestamp="2026-03-21 09:17:25 +0000 UTC" firstStartedPulling="2026-03-21 09:17:26.249546724 +0000 UTC m=+1149.844744993" lastFinishedPulling="2026-03-21 09:17:26.349551806 +0000 UTC m=+1149.944750075" observedRunningTime="2026-03-21 09:17:30.414609245 +0000 UTC m=+1154.009807524" watchObservedRunningTime="2026-03-21 09:17:30.420884258 +0000 UTC m=+1154.016082527" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.444594 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-w54m5" podStartSLOduration=3.062923677 podStartE2EDuration="6.444571601s" podCreationTimestamp="2026-03-21 09:17:24 +0000 UTC" firstStartedPulling="2026-03-21 09:17:25.940525652 +0000 UTC m=+1149.535723921" lastFinishedPulling="2026-03-21 09:17:29.322173566 +0000 UTC m=+1152.917371845" observedRunningTime="2026-03-21 09:17:30.430480525 +0000 UTC m=+1154.025678804" watchObservedRunningTime="2026-03-21 09:17:30.444571601 +0000 UTC m=+1154.039769870" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.470274 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.88864583 podStartE2EDuration="30.470255235s" podCreationTimestamp="2026-03-21 09:17:00 +0000 UTC" firstStartedPulling="2026-03-21 09:17:19.718358384 +0000 UTC m=+1143.313556653" lastFinishedPulling="2026-03-21 09:17:29.299967789 +0000 UTC m=+1152.895166058" observedRunningTime="2026-03-21 09:17:30.464980282 +0000 UTC m=+1154.060178551" watchObservedRunningTime="2026-03-21 09:17:30.470255235 +0000 UTC m=+1154.065453504" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.509157 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.468986427 podStartE2EDuration="27.509135167s" podCreationTimestamp="2026-03-21 09:17:03 +0000 UTC" firstStartedPulling="2026-03-21 09:17:18.245418374 +0000 UTC m=+1141.840616643" lastFinishedPulling="2026-03-21 09:17:29.285567114 +0000 UTC m=+1152.880765383" observedRunningTime="2026-03-21 09:17:30.484186965 +0000 UTC m=+1154.079385234" watchObservedRunningTime="2026-03-21 09:17:30.509135167 +0000 UTC m=+1154.104333436" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.521038 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" 
podStartSLOduration=5.521020354 podStartE2EDuration="5.521020354s" podCreationTimestamp="2026-03-21 09:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:17:30.510782337 +0000 UTC m=+1154.105980606" watchObservedRunningTime="2026-03-21 09:17:30.521020354 +0000 UTC m=+1154.116218623" Mar 21 09:17:30 crc kubenswrapper[4932]: I0321 09:17:30.743506 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.551426 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.551947 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.593313 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.814057 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.905301 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.905378 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.935097 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 21 09:17:31 crc kubenswrapper[4932]: I0321 09:17:31.951071 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 
09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.064340 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.109242 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.424629 4932 generic.go:334] "Generic (PLEG): container finished" podID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerID="b3f7a9ebf09c962bfc77cd713a69c5b6eab7ea38596e4511c70ad60d57541524" exitCode=0 Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.424740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerDied","Data":"b3f7a9ebf09c962bfc77cd713a69c5b6eab7ea38596e4511c70ad60d57541524"} Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.425033 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.470131 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.474090 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.483462 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.587541 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbqj9"] Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.778107 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.779622 4932 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.783042 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.783453 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.786250 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-w4ntt" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.786615 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.797089 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.951340 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.951705 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ll7\" (UniqueName: \"kubernetes.io/projected/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-kube-api-access-98ll7\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.951862 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-config\") pod \"ovn-northd-0\" (UID: 
\"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.951940 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.952169 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-scripts\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.952261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:32 crc kubenswrapper[4932]: I0321 09:17:32.952460 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.054103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 
09:17:33.054178 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98ll7\" (UniqueName: \"kubernetes.io/projected/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-kube-api-access-98ll7\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.054240 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-config\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.054282 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.054375 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-scripts\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.054411 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.054486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.055231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.055688 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-config\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.056191 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-scripts\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.061489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.061695 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.071809 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.072424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98ll7\" (UniqueName: \"kubernetes.io/projected/04747ad4-d988-4d6d-8a2e-ab0e28e2cda0-kube-api-access-98ll7\") pod \"ovn-northd-0\" (UID: \"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0\") " pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.095681 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 21 09:17:33 crc kubenswrapper[4932]: I0321 09:17:33.538938 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 21 09:17:33 crc kubenswrapper[4932]: W0321 09:17:33.544168 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04747ad4_d988_4d6d_8a2e_ab0e28e2cda0.slice/crio-b3fe4d5f661b8699251d352bf3ee3fc4a9384d39766ce4cd07a3dc7fabd82537 WatchSource:0}: Error finding container b3fe4d5f661b8699251d352bf3ee3fc4a9384d39766ce4cd07a3dc7fabd82537: Status 404 returned error can't find the container with id b3fe4d5f661b8699251d352bf3ee3fc4a9384d39766ce4cd07a3dc7fabd82537 Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.096475 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6sjjg"] Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.098070 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.100963 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.108050 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6sjjg"] Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.218115 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.219213 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.283369 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-operator-scripts\") pod \"root-account-create-update-6sjjg\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.283417 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpq98\" (UniqueName: \"kubernetes.io/projected/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-kube-api-access-kpq98\") pod \"root-account-create-update-6sjjg\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.370842 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.385292 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-operator-scripts\") pod \"root-account-create-update-6sjjg\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.386394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-operator-scripts\") pod \"root-account-create-update-6sjjg\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.386548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpq98\" (UniqueName: \"kubernetes.io/projected/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-kube-api-access-kpq98\") pod \"root-account-create-update-6sjjg\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.415409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpq98\" (UniqueName: \"kubernetes.io/projected/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-kube-api-access-kpq98\") pod \"root-account-create-update-6sjjg\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.451851 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.453171 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0","Type":"ContainerStarted","Data":"05849c0e092f72337dea2370fb65c9edd80e6d79fc1cc74f4b2c4fbb37ab1f02"} Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.453209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0","Type":"ContainerStarted","Data":"b3fe4d5f661b8699251d352bf3ee3fc4a9384d39766ce4cd07a3dc7fabd82537"} Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.453277 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbqj9" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="registry-server" containerID="cri-o://0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6" gracePeriod=2 Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.488230 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.064473718 podStartE2EDuration="2.488207706s" podCreationTimestamp="2026-03-21 09:17:32 +0000 UTC" firstStartedPulling="2026-03-21 09:17:33.546189267 +0000 UTC m=+1157.141387536" lastFinishedPulling="2026-03-21 09:17:33.969923255 +0000 UTC m=+1157.565121524" observedRunningTime="2026-03-21 09:17:34.484009646 +0000 UTC m=+1158.079207925" watchObservedRunningTime="2026-03-21 09:17:34.488207706 +0000 UTC m=+1158.083405985" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.599021 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 21 09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.949158 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6sjjg"] Mar 21 
09:17:34 crc kubenswrapper[4932]: I0321 09:17:34.954461 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.102171 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-utilities\") pod \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.102215 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-catalog-content\") pod \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.102277 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw72v\" (UniqueName: \"kubernetes.io/projected/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-kube-api-access-fw72v\") pod \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\" (UID: \"2886ddaa-5c25-4044-8cbb-8a9249c32ee4\") " Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.105952 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-utilities" (OuterVolumeSpecName: "utilities") pod "2886ddaa-5c25-4044-8cbb-8a9249c32ee4" (UID: "2886ddaa-5c25-4044-8cbb-8a9249c32ee4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.107774 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.108186 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-kube-api-access-fw72v" (OuterVolumeSpecName: "kube-api-access-fw72v") pod "2886ddaa-5c25-4044-8cbb-8a9249c32ee4" (UID: "2886ddaa-5c25-4044-8cbb-8a9249c32ee4"). InnerVolumeSpecName "kube-api-access-fw72v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.178649 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2886ddaa-5c25-4044-8cbb-8a9249c32ee4" (UID: "2886ddaa-5c25-4044-8cbb-8a9249c32ee4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.209536 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.209573 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw72v\" (UniqueName: \"kubernetes.io/projected/2886ddaa-5c25-4044-8cbb-8a9249c32ee4-kube-api-access-fw72v\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.464881 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sjjg" event={"ID":"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674","Type":"ContainerStarted","Data":"3dcc4c4b805e95b0398a0d8a5d04fa36701678b1156c854a537e115731256f7b"} Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.465211 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sjjg" event={"ID":"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674","Type":"ContainerStarted","Data":"ed4501d8c31923c5f8b684769d39fed27d70dd5d88066cc09d90b51e84c40c2b"} Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.469556 4932 generic.go:334] "Generic (PLEG): container finished" podID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerID="0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6" exitCode=0 Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.469641 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqj9" event={"ID":"2886ddaa-5c25-4044-8cbb-8a9249c32ee4","Type":"ContainerDied","Data":"0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6"} Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.469760 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbqj9" 
event={"ID":"2886ddaa-5c25-4044-8cbb-8a9249c32ee4","Type":"ContainerDied","Data":"4f0f3d7e33de3c76345f1a5958ba24721dd9b07ce2a616b0b7addd4a7c55a027"} Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.469789 4932 scope.go:117] "RemoveContainer" containerID="0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.469953 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbqj9" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.488839 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"04747ad4-d988-4d6d-8a2e-ab0e28e2cda0","Type":"ContainerStarted","Data":"c39dac66517fb4cee99160a3e20ac7bcd725df3d3c41e601cfc5ec1cc7dd4982"} Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.489221 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.492509 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6sjjg" podStartSLOduration=1.4924889399999999 podStartE2EDuration="1.49248894s" podCreationTimestamp="2026-03-21 09:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:17:35.484092781 +0000 UTC m=+1159.079291070" watchObservedRunningTime="2026-03-21 09:17:35.49248894 +0000 UTC m=+1159.087687209" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.510757 4932 scope.go:117] "RemoveContainer" containerID="51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.513850 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbqj9"] Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.522880 4932 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbqj9"] Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.530564 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.544980 4932 scope.go:117] "RemoveContainer" containerID="8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.633882 4932 scope.go:117] "RemoveContainer" containerID="0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6" Mar 21 09:17:35 crc kubenswrapper[4932]: E0321 09:17:35.634938 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6\": container with ID starting with 0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6 not found: ID does not exist" containerID="0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.634965 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6"} err="failed to get container status \"0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6\": rpc error: code = NotFound desc = could not find container \"0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6\": container with ID starting with 0525af071c132efb67d5967f790028b3781473a0af1cc2d1fd81d307bfa4aef6 not found: ID does not exist" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.634986 4932 scope.go:117] "RemoveContainer" containerID="51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a" Mar 21 09:17:35 crc kubenswrapper[4932]: E0321 09:17:35.635672 4932 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a\": container with ID starting with 51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a not found: ID does not exist" containerID="51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.635703 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a"} err="failed to get container status \"51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a\": rpc error: code = NotFound desc = could not find container \"51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a\": container with ID starting with 51c872f85eb1d9f09d08d7191d1dd4b0cfc327ea1411f17f7679d3dbd407263a not found: ID does not exist" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.635717 4932 scope.go:117] "RemoveContainer" containerID="8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd" Mar 21 09:17:35 crc kubenswrapper[4932]: E0321 09:17:35.636091 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd\": container with ID starting with 8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd not found: ID does not exist" containerID="8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.636107 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd"} err="failed to get container status \"8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd\": rpc error: code = NotFound desc = could not find container 
\"8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd\": container with ID starting with 8d08507c7e26f163ed40222d5f2a73a1db133636c9e4f0714466c73a5231f8dd not found: ID does not exist" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.723987 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" path="/var/lib/kubelet/pods/2886ddaa-5c25-4044-8cbb-8a9249c32ee4/volumes" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.870840 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" Mar 21 09:17:35 crc kubenswrapper[4932]: I0321 09:17:35.926188 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-ml8g9"] Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.500602 4932 generic.go:334] "Generic (PLEG): container finished" podID="2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" containerID="3dcc4c4b805e95b0398a0d8a5d04fa36701678b1156c854a537e115731256f7b" exitCode=0 Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.500742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sjjg" event={"ID":"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674","Type":"ContainerDied","Data":"3dcc4c4b805e95b0398a0d8a5d04fa36701678b1156c854a537e115731256f7b"} Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.500821 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerName="dnsmasq-dns" containerID="cri-o://aeb89cb1704e5221248de580a11f4bf399dc42f33c4506f75c5a0edb312d35e4" gracePeriod=10 Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.738031 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-f2cbd"] Mar 21 09:17:36 crc kubenswrapper[4932]: E0321 09:17:36.738533 4932 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="extract-content" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.738557 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="extract-content" Mar 21 09:17:36 crc kubenswrapper[4932]: E0321 09:17:36.738594 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="extract-utilities" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.738606 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="extract-utilities" Mar 21 09:17:36 crc kubenswrapper[4932]: E0321 09:17:36.738631 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="registry-server" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.738640 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="registry-server" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.738847 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2886ddaa-5c25-4044-8cbb-8a9249c32ee4" containerName="registry-server" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.739589 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.756086 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-f2cbd"] Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.838302 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a3d247-2ac1-4abe-9372-38b3a73d970c-operator-scripts\") pod \"keystone-db-create-f2cbd\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.838731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ctnn\" (UniqueName: \"kubernetes.io/projected/b4a3d247-2ac1-4abe-9372-38b3a73d970c-kube-api-access-6ctnn\") pod \"keystone-db-create-f2cbd\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.839024 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a3d8-account-create-update-kh2n5"] Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.840430 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.842455 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.848005 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a3d8-account-create-update-kh2n5"] Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.941465 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a3d247-2ac1-4abe-9372-38b3a73d970c-operator-scripts\") pod \"keystone-db-create-f2cbd\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.941543 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-operator-scripts\") pod \"keystone-a3d8-account-create-update-kh2n5\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.941615 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ctnn\" (UniqueName: \"kubernetes.io/projected/b4a3d247-2ac1-4abe-9372-38b3a73d970c-kube-api-access-6ctnn\") pod \"keystone-db-create-f2cbd\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.941675 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxnm\" (UniqueName: \"kubernetes.io/projected/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-kube-api-access-4jxnm\") pod \"keystone-a3d8-account-create-update-kh2n5\" (UID: 
\"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.942800 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a3d247-2ac1-4abe-9372-38b3a73d970c-operator-scripts\") pod \"keystone-db-create-f2cbd\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.969154 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ctnn\" (UniqueName: \"kubernetes.io/projected/b4a3d247-2ac1-4abe-9372-38b3a73d970c-kube-api-access-6ctnn\") pod \"keystone-db-create-f2cbd\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.980306 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bqqzc"] Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.982009 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:36 crc kubenswrapper[4932]: I0321 09:17:36.988612 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bqqzc"] Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.043465 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ad9d2-3270-4462-bcd2-332f735c1dc7-operator-scripts\") pod \"placement-db-create-bqqzc\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.043549 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-operator-scripts\") pod \"keystone-a3d8-account-create-update-kh2n5\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.043639 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g2j\" (UniqueName: \"kubernetes.io/projected/b33ad9d2-3270-4462-bcd2-332f735c1dc7-kube-api-access-w8g2j\") pod \"placement-db-create-bqqzc\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.043666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxnm\" (UniqueName: \"kubernetes.io/projected/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-kube-api-access-4jxnm\") pod \"keystone-a3d8-account-create-update-kh2n5\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.044474 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-operator-scripts\") pod \"keystone-a3d8-account-create-update-kh2n5\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.062256 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.062602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxnm\" (UniqueName: \"kubernetes.io/projected/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-kube-api-access-4jxnm\") pod \"keystone-a3d8-account-create-update-kh2n5\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.102881 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e324-account-create-update-dwb42"] Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.105721 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.110657 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.116332 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e324-account-create-update-dwb42"] Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.146252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ad9d2-3270-4462-bcd2-332f735c1dc7-operator-scripts\") pod \"placement-db-create-bqqzc\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.146499 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8g2j\" (UniqueName: \"kubernetes.io/projected/b33ad9d2-3270-4462-bcd2-332f735c1dc7-kube-api-access-w8g2j\") pod \"placement-db-create-bqqzc\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.146958 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ad9d2-3270-4462-bcd2-332f735c1dc7-operator-scripts\") pod \"placement-db-create-bqqzc\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.157325 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.164014 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8g2j\" (UniqueName: \"kubernetes.io/projected/b33ad9d2-3270-4462-bcd2-332f735c1dc7-kube-api-access-w8g2j\") pod \"placement-db-create-bqqzc\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.252804 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcbs\" (UniqueName: \"kubernetes.io/projected/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-kube-api-access-kwcbs\") pod \"placement-e324-account-create-update-dwb42\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.253319 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-operator-scripts\") pod \"placement-e324-account-create-update-dwb42\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.338461 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.357439 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-operator-scripts\") pod \"placement-e324-account-create-update-dwb42\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.357493 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcbs\" (UniqueName: \"kubernetes.io/projected/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-kube-api-access-kwcbs\") pod \"placement-e324-account-create-update-dwb42\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.358811 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-operator-scripts\") pod \"placement-e324-account-create-update-dwb42\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.377672 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcbs\" (UniqueName: \"kubernetes.io/projected/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-kube-api-access-kwcbs\") pod \"placement-e324-account-create-update-dwb42\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.454192 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.520557 4932 generic.go:334] "Generic (PLEG): container finished" podID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerID="aeb89cb1704e5221248de580a11f4bf399dc42f33c4506f75c5a0edb312d35e4" exitCode=0 Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.520646 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" event={"ID":"c5e5ca63-527b-4907-9e46-e11690797d6f","Type":"ContainerDied","Data":"aeb89cb1704e5221248de580a11f4bf399dc42f33c4506f75c5a0edb312d35e4"} Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.531729 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-f2cbd"] Mar 21 09:17:37 crc kubenswrapper[4932]: W0321 09:17:37.568396 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a3d247_2ac1_4abe_9372_38b3a73d970c.slice/crio-df187e6168ba12d990787f18bc8f8b44aa1949599e1e965560537a081731b958 WatchSource:0}: Error finding container df187e6168ba12d990787f18bc8f8b44aa1949599e1e965560537a081731b958: Status 404 returned error can't find the container with id df187e6168ba12d990787f18bc8f8b44aa1949599e1e965560537a081731b958 Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.741969 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a3d8-account-create-update-kh2n5"] Mar 21 09:17:37 crc kubenswrapper[4932]: W0321 09:17:37.748960 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c827e78_d37f_4c9c_aa0b_7f7aa0b51a17.slice/crio-5a43e8771da1c954c7dca0a650e8f2c3476a928db795b904ea2ff34d9d128ea5 WatchSource:0}: Error finding container 5a43e8771da1c954c7dca0a650e8f2c3476a928db795b904ea2ff34d9d128ea5: Status 404 returned error can't find the container 
with id 5a43e8771da1c954c7dca0a650e8f2c3476a928db795b904ea2ff34d9d128ea5 Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.879659 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bqqzc"] Mar 21 09:17:37 crc kubenswrapper[4932]: W0321 09:17:37.884373 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb33ad9d2_3270_4462_bcd2_332f735c1dc7.slice/crio-e4316652d49ca755f68d213be190bb7d433c196c6e67961ef36d0c5612018b26 WatchSource:0}: Error finding container e4316652d49ca755f68d213be190bb7d433c196c6e67961ef36d0c5612018b26: Status 404 returned error can't find the container with id e4316652d49ca755f68d213be190bb7d433c196c6e67961ef36d0c5612018b26 Mar 21 09:17:37 crc kubenswrapper[4932]: I0321 09:17:37.949678 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6sjjg" Mar 21 09:17:38 crc kubenswrapper[4932]: W0321 09:17:38.032968 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98addc41_3e50_4aa5_88fd_b6b68dc6c4c9.slice/crio-50fa177b4be9dbf7e4821d5971af5131eb7d96a3efb2dd4b6860e991feb959cc WatchSource:0}: Error finding container 50fa177b4be9dbf7e4821d5971af5131eb7d96a3efb2dd4b6860e991feb959cc: Status 404 returned error can't find the container with id 50fa177b4be9dbf7e4821d5971af5131eb7d96a3efb2dd4b6860e991feb959cc Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.041855 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e324-account-create-update-dwb42"] Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.068518 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpq98\" (UniqueName: \"kubernetes.io/projected/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-kube-api-access-kpq98\") pod 
\"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.068631 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-operator-scripts\") pod \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\" (UID: \"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674\") " Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.069061 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" (UID: "2e0e5c16-546b-4203-bf4d-ffdbb0eaf674"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.075832 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-kube-api-access-kpq98" (OuterVolumeSpecName: "kube-api-access-kpq98") pod "2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" (UID: "2e0e5c16-546b-4203-bf4d-ffdbb0eaf674"). InnerVolumeSpecName "kube-api-access-kpq98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.170922 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpq98\" (UniqueName: \"kubernetes.io/projected/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-kube-api-access-kpq98\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.170969 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.346891 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.386734 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-pwgtb"] Mar 21 09:17:38 crc kubenswrapper[4932]: E0321 09:17:38.387101 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" containerName="mariadb-account-create-update" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.387116 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" containerName="mariadb-account-create-update" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.387300 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" containerName="mariadb-account-create-update" Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.388167 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.427427 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-pwgtb"]
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.446830 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-mxvmb"]
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.448040 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.473773 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-mxvmb"]
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.475946 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.476057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-dns-svc\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.476089 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-config\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.476115 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.476147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsf72\" (UniqueName: \"kubernetes.io/projected/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-kube-api-access-lsf72\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.531928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e324-account-create-update-dwb42" event={"ID":"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9","Type":"ContainerStarted","Data":"50fa177b4be9dbf7e4821d5971af5131eb7d96a3efb2dd4b6860e991feb959cc"}
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.532772 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a3d8-account-create-update-kh2n5" event={"ID":"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17","Type":"ContainerStarted","Data":"5a43e8771da1c954c7dca0a650e8f2c3476a928db795b904ea2ff34d9d128ea5"}
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.533510 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-f2cbd" event={"ID":"b4a3d247-2ac1-4abe-9372-38b3a73d970c","Type":"ContainerStarted","Data":"df187e6168ba12d990787f18bc8f8b44aa1949599e1e965560537a081731b958"}
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.540197 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6sjjg" event={"ID":"2e0e5c16-546b-4203-bf4d-ffdbb0eaf674","Type":"ContainerDied","Data":"ed4501d8c31923c5f8b684769d39fed27d70dd5d88066cc09d90b51e84c40c2b"}
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.540241 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4501d8c31923c5f8b684769d39fed27d70dd5d88066cc09d90b51e84c40c2b"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.540303 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6sjjg"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.556658 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bqqzc" event={"ID":"b33ad9d2-3270-4462-bcd2-332f735c1dc7","Type":"ContainerStarted","Data":"e4316652d49ca755f68d213be190bb7d433c196c6e67961ef36d0c5612018b26"}
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.566039 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-cb17-account-create-update-98rqh"]
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.567912 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.578547 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-cb17-account-create-update-98rqh"]
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.579607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.579694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsf72\" (UniqueName: \"kubernetes.io/projected/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-kube-api-access-lsf72\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.582528 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.583901 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.585126 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.585189 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrg6\" (UniqueName: \"kubernetes.io/projected/ce0c083c-1e70-4300-b533-4adbba1989e2-kube-api-access-crrg6\") pod \"watcher-db-create-mxvmb\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.585324 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0c083c-1e70-4300-b533-4adbba1989e2-operator-scripts\") pod \"watcher-db-create-mxvmb\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.585413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-dns-svc\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.585467 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-config\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.586268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-config\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.586930 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.588031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-dns-svc\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.614324 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsf72\" (UniqueName: \"kubernetes.io/projected/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-kube-api-access-lsf72\") pod \"dnsmasq-dns-7d95ff5b97-pwgtb\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.690636 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0c083c-1e70-4300-b533-4adbba1989e2-operator-scripts\") pod \"watcher-db-create-mxvmb\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.691121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9klf\" (UniqueName: \"kubernetes.io/projected/f7962d98-3f35-4702-8035-a15b0b2223c8-kube-api-access-r9klf\") pod \"watcher-cb17-account-create-update-98rqh\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.691158 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7962d98-3f35-4702-8035-a15b0b2223c8-operator-scripts\") pod \"watcher-cb17-account-create-update-98rqh\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.691239 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crrg6\" (UniqueName: \"kubernetes.io/projected/ce0c083c-1e70-4300-b533-4adbba1989e2-kube-api-access-crrg6\") pod \"watcher-db-create-mxvmb\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.692373 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0c083c-1e70-4300-b533-4adbba1989e2-operator-scripts\") pod \"watcher-db-create-mxvmb\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.723040 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crrg6\" (UniqueName: \"kubernetes.io/projected/ce0c083c-1e70-4300-b533-4adbba1989e2-kube-api-access-crrg6\") pod \"watcher-db-create-mxvmb\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.723475 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.768720 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mxvmb"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.794120 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9klf\" (UniqueName: \"kubernetes.io/projected/f7962d98-3f35-4702-8035-a15b0b2223c8-kube-api-access-r9klf\") pod \"watcher-cb17-account-create-update-98rqh\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.794163 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7962d98-3f35-4702-8035-a15b0b2223c8-operator-scripts\") pod \"watcher-cb17-account-create-update-98rqh\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.795578 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7962d98-3f35-4702-8035-a15b0b2223c8-operator-scripts\") pod \"watcher-cb17-account-create-update-98rqh\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.816660 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9klf\" (UniqueName: \"kubernetes.io/projected/f7962d98-3f35-4702-8035-a15b0b2223c8-kube-api-access-r9klf\") pod \"watcher-cb17-account-create-update-98rqh\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.900125 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-cb17-account-create-update-98rqh"
Mar 21 09:17:38 crc kubenswrapper[4932]: I0321 09:17:38.984582 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.098687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-dns-svc\") pod \"c5e5ca63-527b-4907-9e46-e11690797d6f\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") "
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.098824 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-config\") pod \"c5e5ca63-527b-4907-9e46-e11690797d6f\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") "
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.098855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-ovsdbserver-nb\") pod \"c5e5ca63-527b-4907-9e46-e11690797d6f\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") "
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.098959 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td5mh\" (UniqueName: \"kubernetes.io/projected/c5e5ca63-527b-4907-9e46-e11690797d6f-kube-api-access-td5mh\") pod \"c5e5ca63-527b-4907-9e46-e11690797d6f\" (UID: \"c5e5ca63-527b-4907-9e46-e11690797d6f\") "
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.108666 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e5ca63-527b-4907-9e46-e11690797d6f-kube-api-access-td5mh" (OuterVolumeSpecName: "kube-api-access-td5mh") pod "c5e5ca63-527b-4907-9e46-e11690797d6f" (UID: "c5e5ca63-527b-4907-9e46-e11690797d6f"). InnerVolumeSpecName "kube-api-access-td5mh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.162017 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5e5ca63-527b-4907-9e46-e11690797d6f" (UID: "c5e5ca63-527b-4907-9e46-e11690797d6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.174927 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-config" (OuterVolumeSpecName: "config") pod "c5e5ca63-527b-4907-9e46-e11690797d6f" (UID: "c5e5ca63-527b-4907-9e46-e11690797d6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.175037 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5e5ca63-527b-4907-9e46-e11690797d6f" (UID: "c5e5ca63-527b-4907-9e46-e11690797d6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.201514 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.201549 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.201562 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5e5ca63-527b-4907-9e46-e11690797d6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.201577 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td5mh\" (UniqueName: \"kubernetes.io/projected/c5e5ca63-527b-4907-9e46-e11690797d6f-kube-api-access-td5mh\") on node \"crc\" DevicePath \"\""
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.267694 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-mxvmb"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.311098 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-pwgtb"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.545474 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-cb17-account-create-update-98rqh"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.575610 4932 generic.go:334] "Generic (PLEG): container finished" podID="b33ad9d2-3270-4462-bcd2-332f735c1dc7" containerID="e35437e3e0f00f1a0ca0aabc7bd5f9c57a1b253b365db5853947bd556a10cf95" exitCode=0
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.575719 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bqqzc" event={"ID":"b33ad9d2-3270-4462-bcd2-332f735c1dc7","Type":"ContainerDied","Data":"e35437e3e0f00f1a0ca0aabc7bd5f9c57a1b253b365db5853947bd556a10cf95"}
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.579710 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.579708 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bdb65675-ml8g9" event={"ID":"c5e5ca63-527b-4907-9e46-e11690797d6f","Type":"ContainerDied","Data":"9097155ddeabfa564fe5926d052931cc968a8fd8a0151356a52be4c728622c83"}
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.580301 4932 scope.go:117] "RemoveContainer" containerID="aeb89cb1704e5221248de580a11f4bf399dc42f33c4506f75c5a0edb312d35e4"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.581242 4932 generic.go:334] "Generic (PLEG): container finished" podID="98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" containerID="3e93259fb62ca4d1bb7ae4c89fc579207149edd5e901ec88f265d0f24d487ae2" exitCode=0
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.581310 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e324-account-create-update-dwb42" event={"ID":"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9","Type":"ContainerDied","Data":"3e93259fb62ca4d1bb7ae4c89fc579207149edd5e901ec88f265d0f24d487ae2"}
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.583203 4932 generic.go:334] "Generic (PLEG): container finished" podID="6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" containerID="355e91a60a2e5cc87067699084326e157e2fbf4472e0988044c96a16f7ca903c" exitCode=0
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.583257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a3d8-account-create-update-kh2n5" event={"ID":"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17","Type":"ContainerDied","Data":"355e91a60a2e5cc87067699084326e157e2fbf4472e0988044c96a16f7ca903c"}
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.593788 4932 generic.go:334] "Generic (PLEG): container finished" podID="b4a3d247-2ac1-4abe-9372-38b3a73d970c" containerID="0817c6307ba5c2e366515dd0cf717b87035466bc4ab3004424efd207440f58c8" exitCode=0
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.593847 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-f2cbd" event={"ID":"b4a3d247-2ac1-4abe-9372-38b3a73d970c","Type":"ContainerDied","Data":"0817c6307ba5c2e366515dd0cf717b87035466bc4ab3004424efd207440f58c8"}
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.613716 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Mar 21 09:17:39 crc kubenswrapper[4932]: E0321 09:17:39.614089 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerName="init"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.614104 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerName="init"
Mar 21 09:17:39 crc kubenswrapper[4932]: E0321 09:17:39.614118 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerName="dnsmasq-dns"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.614125 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerName="dnsmasq-dns"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.614317 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" containerName="dnsmasq-dns"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.620018 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.623901 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.624056 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.624171 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.625658 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-bczmt"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.653677 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.712603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a350804d-f44d-4a1c-b748-24af07a9e811-cache\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.712700 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.712762 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.713003 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350804d-f44d-4a1c-b748-24af07a9e811-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.713073 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a350804d-f44d-4a1c-b748-24af07a9e811-lock\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.713191 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xh9\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-kube-api-access-z7xh9\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.723265 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-ml8g9"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.723329 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bdb65675-ml8g9"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.823030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xh9\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-kube-api-access-z7xh9\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.823112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a350804d-f44d-4a1c-b748-24af07a9e811-cache\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.823150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.823217 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.823274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350804d-f44d-4a1c-b748-24af07a9e811-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.823300 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a350804d-f44d-4a1c-b748-24af07a9e811-lock\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.824057 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a350804d-f44d-4a1c-b748-24af07a9e811-lock\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.824524 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.824739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a350804d-f44d-4a1c-b748-24af07a9e811-cache\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: E0321 09:17:39.825083 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 21 09:17:39 crc kubenswrapper[4932]: E0321 09:17:39.825114 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 21 09:17:39 crc kubenswrapper[4932]: E0321 09:17:39.825509 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift podName:a350804d-f44d-4a1c-b748-24af07a9e811 nodeName:}" failed. No retries permitted until 2026-03-21 09:17:40.325339845 +0000 UTC m=+1163.920538114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift") pod "swift-storage-0" (UID: "a350804d-f44d-4a1c-b748-24af07a9e811") : configmap "swift-ring-files" not found
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.835746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350804d-f44d-4a1c-b748-24af07a9e811-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.840391 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rxcdd"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.845587 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.849568 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.850225 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.852722 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.861965 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rxcdd"]
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.872212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xh9\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-kube-api-access-z7xh9\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.893553 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925277 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjdkj\" (UniqueName: \"kubernetes.io/projected/59f360d4-96d6-4693-a5f8-2473c4d55eca-kube-api-access-zjdkj\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925389 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-ring-data-devices\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925409 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-swiftconf\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925432 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-dispersionconf\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f360d4-96d6-4693-a5f8-2473c4d55eca-etc-swift\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-scripts\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:39 crc kubenswrapper[4932]: I0321 09:17:39.925512 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-combined-ca-bundle\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027390 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-ring-data-devices\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-swiftconf\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-dispersionconf\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027513 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f360d4-96d6-4693-a5f8-2473c4d55eca-etc-swift\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027545 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-scripts\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-combined-ca-bundle\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.027648 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjdkj\" (UniqueName: \"kubernetes.io/projected/59f360d4-96d6-4693-a5f8-2473c4d55eca-kube-api-access-zjdkj\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd"
Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.028261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\"
(UniqueName: \"kubernetes.io/empty-dir/59f360d4-96d6-4693-a5f8-2473c4d55eca-etc-swift\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.028440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-ring-data-devices\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.028443 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-scripts\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.032642 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-combined-ca-bundle\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.032656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-swiftconf\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.044743 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-dispersionconf\") pod \"swift-ring-rebalance-rxcdd\" (UID: 
\"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.045402 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjdkj\" (UniqueName: \"kubernetes.io/projected/59f360d4-96d6-4693-a5f8-2473c4d55eca-kube-api-access-zjdkj\") pod \"swift-ring-rebalance-rxcdd\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.251642 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.337528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:17:40 crc kubenswrapper[4932]: E0321 09:17:40.337749 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 21 09:17:40 crc kubenswrapper[4932]: E0321 09:17:40.337775 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 21 09:17:40 crc kubenswrapper[4932]: E0321 09:17:40.337838 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift podName:a350804d-f44d-4a1c-b748-24af07a9e811 nodeName:}" failed. No retries permitted until 2026-03-21 09:17:41.337817067 +0000 UTC m=+1164.933015336 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift") pod "swift-storage-0" (UID: "a350804d-f44d-4a1c-b748-24af07a9e811") : configmap "swift-ring-files" not found Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.946018 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5g82r"] Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.948078 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5g82r" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.964113 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9550-account-create-update-x8bqm"] Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.965285 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.967745 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 21 09:17:40 crc kubenswrapper[4932]: I0321 09:17:40.984922 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5g82r"] Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.004834 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9550-account-create-update-x8bqm"] Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.051699 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c69b6abf-7357-49bd-8f5c-0fe8b1382238-operator-scripts\") pod \"glance-db-create-5g82r\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.051749 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-j4d7l\" (UniqueName: \"kubernetes.io/projected/041ac472-88ec-4f81-8e91-a0c80ed96a97-kube-api-access-j4d7l\") pod \"glance-9550-account-create-update-x8bqm\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.051836 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg6q2\" (UniqueName: \"kubernetes.io/projected/c69b6abf-7357-49bd-8f5c-0fe8b1382238-kube-api-access-mg6q2\") pod \"glance-db-create-5g82r\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.051862 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041ac472-88ec-4f81-8e91-a0c80ed96a97-operator-scripts\") pod \"glance-9550-account-create-update-x8bqm\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.153908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg6q2\" (UniqueName: \"kubernetes.io/projected/c69b6abf-7357-49bd-8f5c-0fe8b1382238-kube-api-access-mg6q2\") pod \"glance-db-create-5g82r\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.153957 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041ac472-88ec-4f81-8e91-a0c80ed96a97-operator-scripts\") pod \"glance-9550-account-create-update-x8bqm\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 
09:17:41.154055 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c69b6abf-7357-49bd-8f5c-0fe8b1382238-operator-scripts\") pod \"glance-db-create-5g82r\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.154079 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4d7l\" (UniqueName: \"kubernetes.io/projected/041ac472-88ec-4f81-8e91-a0c80ed96a97-kube-api-access-j4d7l\") pod \"glance-9550-account-create-update-x8bqm\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.155031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c69b6abf-7357-49bd-8f5c-0fe8b1382238-operator-scripts\") pod \"glance-db-create-5g82r\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.159222 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041ac472-88ec-4f81-8e91-a0c80ed96a97-operator-scripts\") pod \"glance-9550-account-create-update-x8bqm\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.171000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg6q2\" (UniqueName: \"kubernetes.io/projected/c69b6abf-7357-49bd-8f5c-0fe8b1382238-kube-api-access-mg6q2\") pod \"glance-db-create-5g82r\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.171223 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4d7l\" (UniqueName: \"kubernetes.io/projected/041ac472-88ec-4f81-8e91-a0c80ed96a97-kube-api-access-j4d7l\") pod \"glance-9550-account-create-update-x8bqm\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.270450 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5g82r" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.291140 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.357251 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:17:41 crc kubenswrapper[4932]: E0321 09:17:41.357457 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 21 09:17:41 crc kubenswrapper[4932]: E0321 09:17:41.357474 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 21 09:17:41 crc kubenswrapper[4932]: E0321 09:17:41.357521 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift podName:a350804d-f44d-4a1c-b748-24af07a9e811 nodeName:}" failed. No retries permitted until 2026-03-21 09:17:43.357506028 +0000 UTC m=+1166.952704297 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift") pod "swift-storage-0" (UID: "a350804d-f44d-4a1c-b748-24af07a9e811") : configmap "swift-ring-files" not found Mar 21 09:17:41 crc kubenswrapper[4932]: I0321 09:17:41.713278 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e5ca63-527b-4907-9e46-e11690797d6f" path="/var/lib/kubelet/pods/c5e5ca63-527b-4907-9e46-e11690797d6f/volumes" Mar 21 09:17:42 crc kubenswrapper[4932]: W0321 09:17:42.353139 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec2d9a5c_8ffe_4f2d_a407_2f211230c4e0.slice/crio-63fed95f74552f54342f03b9646f8eeeb20f5a2d87b255a6d207170f3744c2e6 WatchSource:0}: Error finding container 63fed95f74552f54342f03b9646f8eeeb20f5a2d87b255a6d207170f3744c2e6: Status 404 returned error can't find the container with id 63fed95f74552f54342f03b9646f8eeeb20f5a2d87b255a6d207170f3744c2e6 Mar 21 09:17:42 crc kubenswrapper[4932]: W0321 09:17:42.354683 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0c083c_1e70_4300_b533_4adbba1989e2.slice/crio-ef3b1022731caf0b7cab38122cd132066ab1c43888fe6f5f23b13c3aabb9319c WatchSource:0}: Error finding container ef3b1022731caf0b7cab38122cd132066ab1c43888fe6f5f23b13c3aabb9319c: Status 404 returned error can't find the container with id ef3b1022731caf0b7cab38122cd132066ab1c43888fe6f5f23b13c3aabb9319c Mar 21 09:17:42 crc kubenswrapper[4932]: W0321 09:17:42.355136 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7962d98_3f35_4702_8035_a15b0b2223c8.slice/crio-c9be0927a4a784adcb74b0f7f6dc53d83bd7da2f15d3fbd026d9b0b4358f51b3 WatchSource:0}: Error finding container c9be0927a4a784adcb74b0f7f6dc53d83bd7da2f15d3fbd026d9b0b4358f51b3: 
Status 404 returned error can't find the container with id c9be0927a4a784adcb74b0f7f6dc53d83bd7da2f15d3fbd026d9b0b4358f51b3 Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.359392 4932 scope.go:117] "RemoveContainer" containerID="5546b55ae8ccdf4050a92998ec00e225cdf096a62fd7a15092da87cad675c5a7" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.634561 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.664825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-f2cbd" event={"ID":"b4a3d247-2ac1-4abe-9372-38b3a73d970c","Type":"ContainerDied","Data":"df187e6168ba12d990787f18bc8f8b44aa1949599e1e965560537a081731b958"} Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.665130 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df187e6168ba12d990787f18bc8f8b44aa1949599e1e965560537a081731b958" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.666279 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" event={"ID":"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0","Type":"ContainerStarted","Data":"63fed95f74552f54342f03b9646f8eeeb20f5a2d87b255a6d207170f3744c2e6"} Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.669174 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-mxvmb" event={"ID":"ce0c083c-1e70-4300-b533-4adbba1989e2","Type":"ContainerStarted","Data":"ef3b1022731caf0b7cab38122cd132066ab1c43888fe6f5f23b13c3aabb9319c"} Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.670692 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-cb17-account-create-update-98rqh" event={"ID":"f7962d98-3f35-4702-8035-a15b0b2223c8","Type":"ContainerStarted","Data":"c9be0927a4a784adcb74b0f7f6dc53d83bd7da2f15d3fbd026d9b0b4358f51b3"} Mar 21 09:17:42 
crc kubenswrapper[4932]: I0321 09:17:42.673891 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bqqzc" event={"ID":"b33ad9d2-3270-4462-bcd2-332f735c1dc7","Type":"ContainerDied","Data":"e4316652d49ca755f68d213be190bb7d433c196c6e67961ef36d0c5612018b26"} Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.673916 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4316652d49ca755f68d213be190bb7d433c196c6e67961ef36d0c5612018b26" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.678003 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e324-account-create-update-dwb42" event={"ID":"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9","Type":"ContainerDied","Data":"50fa177b4be9dbf7e4821d5971af5131eb7d96a3efb2dd4b6860e991feb959cc"} Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.678104 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fa177b4be9dbf7e4821d5971af5131eb7d96a3efb2dd4b6860e991feb959cc" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.679624 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a3d8-account-create-update-kh2n5" event={"ID":"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17","Type":"ContainerDied","Data":"5a43e8771da1c954c7dca0a650e8f2c3476a928db795b904ea2ff34d9d128ea5"} Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.679652 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a43e8771da1c954c7dca0a650e8f2c3476a928db795b904ea2ff34d9d128ea5" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.679712 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a3d8-account-create-update-kh2n5" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.682788 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxnm\" (UniqueName: \"kubernetes.io/projected/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-kube-api-access-4jxnm\") pod \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.687979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-kube-api-access-4jxnm" (OuterVolumeSpecName: "kube-api-access-4jxnm") pod "6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" (UID: "6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17"). InnerVolumeSpecName "kube-api-access-4jxnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.729031 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.771037 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.784675 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwcbs\" (UniqueName: \"kubernetes.io/projected/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-kube-api-access-kwcbs\") pod \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.784724 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-operator-scripts\") pod \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\" (UID: \"98addc41-3e50-4aa5-88fd-b6b68dc6c4c9\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.784779 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-operator-scripts\") pod \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\" (UID: \"6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.785275 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxnm\" (UniqueName: \"kubernetes.io/projected/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-kube-api-access-4jxnm\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.786342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" (UID: "6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.786699 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" (UID: "98addc41-3e50-4aa5-88fd-b6b68dc6c4c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.799812 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.802055 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-kube-api-access-kwcbs" (OuterVolumeSpecName: "kube-api-access-kwcbs") pod "98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" (UID: "98addc41-3e50-4aa5-88fd-b6b68dc6c4c9"). InnerVolumeSpecName "kube-api-access-kwcbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.837241 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6sjjg"] Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.852289 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6sjjg"] Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.886583 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ctnn\" (UniqueName: \"kubernetes.io/projected/b4a3d247-2ac1-4abe-9372-38b3a73d970c-kube-api-access-6ctnn\") pod \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.886647 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ad9d2-3270-4462-bcd2-332f735c1dc7-operator-scripts\") pod \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.886779 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8g2j\" (UniqueName: \"kubernetes.io/projected/b33ad9d2-3270-4462-bcd2-332f735c1dc7-kube-api-access-w8g2j\") pod \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\" (UID: \"b33ad9d2-3270-4462-bcd2-332f735c1dc7\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.886859 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a3d247-2ac1-4abe-9372-38b3a73d970c-operator-scripts\") pod \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\" (UID: \"b4a3d247-2ac1-4abe-9372-38b3a73d970c\") " Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.887256 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwcbs\" 
(UniqueName: \"kubernetes.io/projected/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-kube-api-access-kwcbs\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.887268 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.887277 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.888285 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33ad9d2-3270-4462-bcd2-332f735c1dc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b33ad9d2-3270-4462-bcd2-332f735c1dc7" (UID: "b33ad9d2-3270-4462-bcd2-332f735c1dc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.889048 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a3d247-2ac1-4abe-9372-38b3a73d970c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4a3d247-2ac1-4abe-9372-38b3a73d970c" (UID: "b4a3d247-2ac1-4abe-9372-38b3a73d970c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.898600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33ad9d2-3270-4462-bcd2-332f735c1dc7-kube-api-access-w8g2j" (OuterVolumeSpecName: "kube-api-access-w8g2j") pod "b33ad9d2-3270-4462-bcd2-332f735c1dc7" (UID: "b33ad9d2-3270-4462-bcd2-332f735c1dc7"). InnerVolumeSpecName "kube-api-access-w8g2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.905944 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a3d247-2ac1-4abe-9372-38b3a73d970c-kube-api-access-6ctnn" (OuterVolumeSpecName: "kube-api-access-6ctnn") pod "b4a3d247-2ac1-4abe-9372-38b3a73d970c" (UID: "b4a3d247-2ac1-4abe-9372-38b3a73d970c"). InnerVolumeSpecName "kube-api-access-6ctnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.911519 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b5vrt"] Mar 21 09:17:42 crc kubenswrapper[4932]: E0321 09:17:42.911867 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a3d247-2ac1-4abe-9372-38b3a73d970c" containerName="mariadb-database-create" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.911879 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a3d247-2ac1-4abe-9372-38b3a73d970c" containerName="mariadb-database-create" Mar 21 09:17:42 crc kubenswrapper[4932]: E0321 09:17:42.911892 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33ad9d2-3270-4462-bcd2-332f735c1dc7" containerName="mariadb-database-create" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.911897 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33ad9d2-3270-4462-bcd2-332f735c1dc7" containerName="mariadb-database-create" Mar 21 09:17:42 crc kubenswrapper[4932]: E0321 09:17:42.911914 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" containerName="mariadb-account-create-update" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.911922 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" containerName="mariadb-account-create-update" Mar 21 09:17:42 crc kubenswrapper[4932]: E0321 09:17:42.911933 
4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" containerName="mariadb-account-create-update" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.911938 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" containerName="mariadb-account-create-update" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.912106 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a3d247-2ac1-4abe-9372-38b3a73d970c" containerName="mariadb-database-create" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.912121 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" containerName="mariadb-account-create-update" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.912133 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" containerName="mariadb-account-create-update" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.912141 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33ad9d2-3270-4462-bcd2-332f735c1dc7" containerName="mariadb-database-create" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.915966 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.918490 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.938881 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5vrt"] Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.972163 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9550-account-create-update-x8bqm"] Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.992199 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rgbc\" (UniqueName: \"kubernetes.io/projected/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-kube-api-access-6rgbc\") pod \"root-account-create-update-b5vrt\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.992318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-operator-scripts\") pod \"root-account-create-update-b5vrt\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.994528 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8g2j\" (UniqueName: \"kubernetes.io/projected/b33ad9d2-3270-4462-bcd2-332f735c1dc7-kube-api-access-w8g2j\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.994559 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4a3d247-2ac1-4abe-9372-38b3a73d970c-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.994570 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ctnn\" (UniqueName: \"kubernetes.io/projected/b4a3d247-2ac1-4abe-9372-38b3a73d970c-kube-api-access-6ctnn\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:42 crc kubenswrapper[4932]: I0321 09:17:42.994581 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ad9d2-3270-4462-bcd2-332f735c1dc7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.051989 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5g82r"] Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.062220 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rxcdd"] Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.095942 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rgbc\" (UniqueName: \"kubernetes.io/projected/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-kube-api-access-6rgbc\") pod \"root-account-create-update-b5vrt\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.096079 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-operator-scripts\") pod \"root-account-create-update-b5vrt\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.096879 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-operator-scripts\") pod \"root-account-create-update-b5vrt\" (UID: 
\"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.114389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rgbc\" (UniqueName: \"kubernetes.io/projected/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-kube-api-access-6rgbc\") pod \"root-account-create-update-b5vrt\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:43 crc kubenswrapper[4932]: E0321 09:17:43.295265 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0c083c_1e70_4300_b533_4adbba1989e2.slice/crio-conmon-10c3b20f0ef0400874eb1a7f4971e523a3d1fef15909975d51b9a51247d2ab4a.scope\": RecentStats: unable to find data in memory cache]" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.329749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.401319 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:17:43 crc kubenswrapper[4932]: E0321 09:17:43.401779 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 21 09:17:43 crc kubenswrapper[4932]: E0321 09:17:43.401795 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 21 09:17:43 crc kubenswrapper[4932]: E0321 09:17:43.401839 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift podName:a350804d-f44d-4a1c-b748-24af07a9e811 nodeName:}" failed. No retries permitted until 2026-03-21 09:17:47.401823631 +0000 UTC m=+1170.997021900 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift") pod "swift-storage-0" (UID: "a350804d-f44d-4a1c-b748-24af07a9e811") : configmap "swift-ring-files" not found Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.690124 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7962d98-3f35-4702-8035-a15b0b2223c8" containerID="c0274b3ff01c1b8a82e4031769a3a5328a4577c2a01c7d652b282387a35fce6b" exitCode=0 Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.690194 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-cb17-account-create-update-98rqh" event={"ID":"f7962d98-3f35-4702-8035-a15b0b2223c8","Type":"ContainerDied","Data":"c0274b3ff01c1b8a82e4031769a3a5328a4577c2a01c7d652b282387a35fce6b"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.693405 4932 generic.go:334] "Generic (PLEG): container finished" podID="041ac472-88ec-4f81-8e91-a0c80ed96a97" containerID="cea2e5a77eeea003668f8b6d7459b0c59245690c77eb65a614b2200a754abec7" exitCode=0 Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.693487 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9550-account-create-update-x8bqm" event={"ID":"041ac472-88ec-4f81-8e91-a0c80ed96a97","Type":"ContainerDied","Data":"cea2e5a77eeea003668f8b6d7459b0c59245690c77eb65a614b2200a754abec7"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.693519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9550-account-create-update-x8bqm" event={"ID":"041ac472-88ec-4f81-8e91-a0c80ed96a97","Type":"ContainerStarted","Data":"326700961ec3b177a735ed6a2ceb41aa7f077d20a71a952b2d59d8acab681e63"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.697269 4932 generic.go:334] "Generic (PLEG): container finished" podID="c69b6abf-7357-49bd-8f5c-0fe8b1382238" containerID="41d9f293f6a7ed39b32f745ab75983e17e7694ed430e13fac7cdc75cb96d1a56" exitCode=0 Mar 21 
09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.697315 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5g82r" event={"ID":"c69b6abf-7357-49bd-8f5c-0fe8b1382238","Type":"ContainerDied","Data":"41d9f293f6a7ed39b32f745ab75983e17e7694ed430e13fac7cdc75cb96d1a56"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.697375 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5g82r" event={"ID":"c69b6abf-7357-49bd-8f5c-0fe8b1382238","Type":"ContainerStarted","Data":"9234825ee6a55c13f54b72f4c7cd9c4a6a778ba6e8c17562cab6356de553fb55"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.699035 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rxcdd" event={"ID":"59f360d4-96d6-4693-a5f8-2473c4d55eca","Type":"ContainerStarted","Data":"22ee8815ef7992df02a4cc28dfd20dd5838778b801f260b58df58e2aaa6ca6da"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.708863 4932 generic.go:334] "Generic (PLEG): container finished" podID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerID="646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6" exitCode=0 Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.711738 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce0c083c-1e70-4300-b533-4adbba1989e2" containerID="10c3b20f0ef0400874eb1a7f4971e523a3d1fef15909975d51b9a51247d2ab4a" exitCode=0 Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.711855 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e324-account-create-update-dwb42" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.711892 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-f2cbd" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.712066 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-bqqzc" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.716676 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0e5c16-546b-4203-bf4d-ffdbb0eaf674" path="/var/lib/kubelet/pods/2e0e5c16-546b-4203-bf4d-ffdbb0eaf674/volumes" Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.735320 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerStarted","Data":"3a06633c76e1f2704b917e24aa79245a033df4c4760636974cda10b880511129"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.735453 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" event={"ID":"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0","Type":"ContainerDied","Data":"646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.735504 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-mxvmb" event={"ID":"ce0c083c-1e70-4300-b533-4adbba1989e2","Type":"ContainerDied","Data":"10c3b20f0ef0400874eb1a7f4971e523a3d1fef15909975d51b9a51247d2ab4a"} Mar 21 09:17:43 crc kubenswrapper[4932]: I0321 09:17:43.872487 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5vrt"] Mar 21 09:17:45 crc kubenswrapper[4932]: W0321 09:17:45.589089 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f0e44dc_9908_4a4f_bc21_18d232f11ec6.slice/crio-e0cd802c43dfc104082058f14830b121c7efd804d871c19b7d68e90f7f98ebd3 WatchSource:0}: Error finding container e0cd802c43dfc104082058f14830b121c7efd804d871c19b7d68e90f7f98ebd3: Status 404 returned error can't find the container with id e0cd802c43dfc104082058f14830b121c7efd804d871c19b7d68e90f7f98ebd3 Mar 21 09:17:45 crc kubenswrapper[4932]: 
I0321 09:17:45.731634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerStarted","Data":"c96bc826f8d211eb20fcc50df225774b214ec8f2952ac456d9c986a813a944a0"} Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.733804 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-mxvmb" event={"ID":"ce0c083c-1e70-4300-b533-4adbba1989e2","Type":"ContainerDied","Data":"ef3b1022731caf0b7cab38122cd132066ab1c43888fe6f5f23b13c3aabb9319c"} Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.733890 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef3b1022731caf0b7cab38122cd132066ab1c43888fe6f5f23b13c3aabb9319c" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.737264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-cb17-account-create-update-98rqh" event={"ID":"f7962d98-3f35-4702-8035-a15b0b2223c8","Type":"ContainerDied","Data":"c9be0927a4a784adcb74b0f7f6dc53d83bd7da2f15d3fbd026d9b0b4358f51b3"} Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.737326 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9be0927a4a784adcb74b0f7f6dc53d83bd7da2f15d3fbd026d9b0b4358f51b3" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.738637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5vrt" event={"ID":"1f0e44dc-9908-4a4f-bc21-18d232f11ec6","Type":"ContainerStarted","Data":"e0cd802c43dfc104082058f14830b121c7efd804d871c19b7d68e90f7f98ebd3"} Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.740572 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9550-account-create-update-x8bqm" event={"ID":"041ac472-88ec-4f81-8e91-a0c80ed96a97","Type":"ContainerDied","Data":"326700961ec3b177a735ed6a2ceb41aa7f077d20a71a952b2d59d8acab681e63"} Mar 21 09:17:45 
crc kubenswrapper[4932]: I0321 09:17:45.740594 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="326700961ec3b177a735ed6a2ceb41aa7f077d20a71a952b2d59d8acab681e63" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.741980 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5g82r" event={"ID":"c69b6abf-7357-49bd-8f5c-0fe8b1382238","Type":"ContainerDied","Data":"9234825ee6a55c13f54b72f4c7cd9c4a6a778ba6e8c17562cab6356de553fb55"} Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.742012 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9234825ee6a55c13f54b72f4c7cd9c4a6a778ba6e8c17562cab6356de553fb55" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.868897 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5g82r" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.905893 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mxvmb" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.922551 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.955224 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c69b6abf-7357-49bd-8f5c-0fe8b1382238-operator-scripts\") pod \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.955267 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg6q2\" (UniqueName: \"kubernetes.io/projected/c69b6abf-7357-49bd-8f5c-0fe8b1382238-kube-api-access-mg6q2\") pod \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\" (UID: \"c69b6abf-7357-49bd-8f5c-0fe8b1382238\") " Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.955335 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041ac472-88ec-4f81-8e91-a0c80ed96a97-operator-scripts\") pod \"041ac472-88ec-4f81-8e91-a0c80ed96a97\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.955392 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0c083c-1e70-4300-b533-4adbba1989e2-operator-scripts\") pod \"ce0c083c-1e70-4300-b533-4adbba1989e2\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.955442 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4d7l\" (UniqueName: \"kubernetes.io/projected/041ac472-88ec-4f81-8e91-a0c80ed96a97-kube-api-access-j4d7l\") pod \"041ac472-88ec-4f81-8e91-a0c80ed96a97\" (UID: \"041ac472-88ec-4f81-8e91-a0c80ed96a97\") " Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.955606 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-crrg6\" (UniqueName: \"kubernetes.io/projected/ce0c083c-1e70-4300-b533-4adbba1989e2-kube-api-access-crrg6\") pod \"ce0c083c-1e70-4300-b533-4adbba1989e2\" (UID: \"ce0c083c-1e70-4300-b533-4adbba1989e2\") " Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.958547 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69b6abf-7357-49bd-8f5c-0fe8b1382238-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c69b6abf-7357-49bd-8f5c-0fe8b1382238" (UID: "c69b6abf-7357-49bd-8f5c-0fe8b1382238"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.959339 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041ac472-88ec-4f81-8e91-a0c80ed96a97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041ac472-88ec-4f81-8e91-a0c80ed96a97" (UID: "041ac472-88ec-4f81-8e91-a0c80ed96a97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.959402 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce0c083c-1e70-4300-b533-4adbba1989e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce0c083c-1e70-4300-b533-4adbba1989e2" (UID: "ce0c083c-1e70-4300-b533-4adbba1989e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.959901 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69b6abf-7357-49bd-8f5c-0fe8b1382238-kube-api-access-mg6q2" (OuterVolumeSpecName: "kube-api-access-mg6q2") pod "c69b6abf-7357-49bd-8f5c-0fe8b1382238" (UID: "c69b6abf-7357-49bd-8f5c-0fe8b1382238"). 
InnerVolumeSpecName "kube-api-access-mg6q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.960953 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-cb17-account-create-update-98rqh" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.962851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0c083c-1e70-4300-b533-4adbba1989e2-kube-api-access-crrg6" (OuterVolumeSpecName: "kube-api-access-crrg6") pod "ce0c083c-1e70-4300-b533-4adbba1989e2" (UID: "ce0c083c-1e70-4300-b533-4adbba1989e2"). InnerVolumeSpecName "kube-api-access-crrg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:45 crc kubenswrapper[4932]: I0321 09:17:45.964217 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041ac472-88ec-4f81-8e91-a0c80ed96a97-kube-api-access-j4d7l" (OuterVolumeSpecName: "kube-api-access-j4d7l") pod "041ac472-88ec-4f81-8e91-a0c80ed96a97" (UID: "041ac472-88ec-4f81-8e91-a0c80ed96a97"). InnerVolumeSpecName "kube-api-access-j4d7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057132 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7962d98-3f35-4702-8035-a15b0b2223c8-operator-scripts\") pod \"f7962d98-3f35-4702-8035-a15b0b2223c8\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9klf\" (UniqueName: \"kubernetes.io/projected/f7962d98-3f35-4702-8035-a15b0b2223c8-kube-api-access-r9klf\") pod \"f7962d98-3f35-4702-8035-a15b0b2223c8\" (UID: \"f7962d98-3f35-4702-8035-a15b0b2223c8\") " Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7962d98-3f35-4702-8035-a15b0b2223c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7962d98-3f35-4702-8035-a15b0b2223c8" (UID: "f7962d98-3f35-4702-8035-a15b0b2223c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057856 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041ac472-88ec-4f81-8e91-a0c80ed96a97-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057877 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce0c083c-1e70-4300-b533-4adbba1989e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057887 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4d7l\" (UniqueName: \"kubernetes.io/projected/041ac472-88ec-4f81-8e91-a0c80ed96a97-kube-api-access-j4d7l\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057897 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7962d98-3f35-4702-8035-a15b0b2223c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057906 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crrg6\" (UniqueName: \"kubernetes.io/projected/ce0c083c-1e70-4300-b533-4adbba1989e2-kube-api-access-crrg6\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057917 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c69b6abf-7357-49bd-8f5c-0fe8b1382238-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.057928 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg6q2\" (UniqueName: \"kubernetes.io/projected/c69b6abf-7357-49bd-8f5c-0fe8b1382238-kube-api-access-mg6q2\") on node \"crc\" DevicePath \"\"" Mar 21 
09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.060696 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7962d98-3f35-4702-8035-a15b0b2223c8-kube-api-access-r9klf" (OuterVolumeSpecName: "kube-api-access-r9klf") pod "f7962d98-3f35-4702-8035-a15b0b2223c8" (UID: "f7962d98-3f35-4702-8035-a15b0b2223c8"). InnerVolumeSpecName "kube-api-access-r9klf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.159508 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9klf\" (UniqueName: \"kubernetes.io/projected/f7962d98-3f35-4702-8035-a15b0b2223c8-kube-api-access-r9klf\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.752744 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" event={"ID":"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0","Type":"ContainerStarted","Data":"4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62"} Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.753251 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.755108 4932 generic.go:334] "Generic (PLEG): container finished" podID="1f0e44dc-9908-4a4f-bc21-18d232f11ec6" containerID="3e9d12128509faa2a10d7f15e063a5addc1ec79cbb107c2a6e56d447e92cba2c" exitCode=0 Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.755177 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5vrt" event={"ID":"1f0e44dc-9908-4a4f-bc21-18d232f11ec6","Type":"ContainerDied","Data":"3e9d12128509faa2a10d7f15e063a5addc1ec79cbb107c2a6e56d447e92cba2c"} Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.757963 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rxcdd" 
event={"ID":"59f360d4-96d6-4693-a5f8-2473c4d55eca","Type":"ContainerStarted","Data":"6fcf9a609aefe11b8ea854525df3c237783b591f8a3077ae5c957caa5dc04c3b"} Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.757991 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-cb17-account-create-update-98rqh" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.758024 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5g82r" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.758040 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-mxvmb" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.758067 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9550-account-create-update-x8bqm" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.821265 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" podStartSLOduration=8.821237298 podStartE2EDuration="8.821237298s" podCreationTimestamp="2026-03-21 09:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:17:46.773884184 +0000 UTC m=+1170.369082453" watchObservedRunningTime="2026-03-21 09:17:46.821237298 +0000 UTC m=+1170.416435577" Mar 21 09:17:46 crc kubenswrapper[4932]: I0321 09:17:46.841277 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rxcdd" podStartSLOduration=5.266400638 podStartE2EDuration="7.841255285s" podCreationTimestamp="2026-03-21 09:17:39 +0000 UTC" firstStartedPulling="2026-03-21 09:17:43.067321681 +0000 UTC m=+1166.662519950" lastFinishedPulling="2026-03-21 09:17:45.642176318 +0000 UTC m=+1169.237374597" observedRunningTime="2026-03-21 
09:17:46.806096274 +0000 UTC m=+1170.401294553" watchObservedRunningTime="2026-03-21 09:17:46.841255285 +0000 UTC m=+1170.436453564" Mar 21 09:17:47 crc kubenswrapper[4932]: I0321 09:17:47.483182 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:17:47 crc kubenswrapper[4932]: E0321 09:17:47.483381 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 21 09:17:47 crc kubenswrapper[4932]: E0321 09:17:47.483401 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 21 09:17:47 crc kubenswrapper[4932]: E0321 09:17:47.483450 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift podName:a350804d-f44d-4a1c-b748-24af07a9e811 nodeName:}" failed. No retries permitted until 2026-03-21 09:17:55.483433965 +0000 UTC m=+1179.078632234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift") pod "swift-storage-0" (UID: "a350804d-f44d-4a1c-b748-24af07a9e811") : configmap "swift-ring-files" not found Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.236749 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.300700 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rgbc\" (UniqueName: \"kubernetes.io/projected/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-kube-api-access-6rgbc\") pod \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.300859 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-operator-scripts\") pod \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\" (UID: \"1f0e44dc-9908-4a4f-bc21-18d232f11ec6\") " Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.302289 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f0e44dc-9908-4a4f-bc21-18d232f11ec6" (UID: "1f0e44dc-9908-4a4f-bc21-18d232f11ec6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.307043 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-kube-api-access-6rgbc" (OuterVolumeSpecName: "kube-api-access-6rgbc") pod "1f0e44dc-9908-4a4f-bc21-18d232f11ec6" (UID: "1f0e44dc-9908-4a4f-bc21-18d232f11ec6"). InnerVolumeSpecName "kube-api-access-6rgbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.403024 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rgbc\" (UniqueName: \"kubernetes.io/projected/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-kube-api-access-6rgbc\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.403076 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0e44dc-9908-4a4f-bc21-18d232f11ec6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.775931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5vrt" event={"ID":"1f0e44dc-9908-4a4f-bc21-18d232f11ec6","Type":"ContainerDied","Data":"e0cd802c43dfc104082058f14830b121c7efd804d871c19b7d68e90f7f98ebd3"} Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.776206 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0cd802c43dfc104082058f14830b121c7efd804d871c19b7d68e90f7f98ebd3" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.776004 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5vrt" Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.782617 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerStarted","Data":"63659034ab0e5e587e585ecb664d8c733e56cdd3655e8f17e0b4c0c6ccaaa621"} Mar 21 09:17:48 crc kubenswrapper[4932]: I0321 09:17:48.823838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.782502699 podStartE2EDuration="50.823817878s" podCreationTimestamp="2026-03-21 09:16:58 +0000 UTC" firstStartedPulling="2026-03-21 09:17:18.044900525 +0000 UTC m=+1141.640098794" lastFinishedPulling="2026-03-21 09:17:48.086215704 +0000 UTC m=+1171.681413973" observedRunningTime="2026-03-21 09:17:48.813445413 +0000 UTC m=+1172.408643702" watchObservedRunningTime="2026-03-21 09:17:48.823817878 +0000 UTC m=+1172.419016147" Mar 21 09:17:50 crc kubenswrapper[4932]: I0321 09:17:50.007696 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 21 09:17:50 crc kubenswrapper[4932]: I0321 09:17:50.802569 4932 generic.go:334] "Generic (PLEG): container finished" podID="debe0e76-6d3a-402f-af21-a3ba7ceb5a24" containerID="58f7bb97afe0e71eb26cea8d832a78b830bc6b66828c9fa1109d9223ffd96114" exitCode=0 Mar 21 09:17:50 crc kubenswrapper[4932]: I0321 09:17:50.802661 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"debe0e76-6d3a-402f-af21-a3ba7ceb5a24","Type":"ContainerDied","Data":"58f7bb97afe0e71eb26cea8d832a78b830bc6b66828c9fa1109d9223ffd96114"} Mar 21 09:17:50 crc kubenswrapper[4932]: I0321 09:17:50.808145 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d3d99e-014e-4924-827a-f3e2f87774c6" containerID="758f8c37174f2caf5319c2f9f0bb18cfba6d9aa67126a7700441d9d09e670ddb" 
exitCode=0 Mar 21 09:17:50 crc kubenswrapper[4932]: I0321 09:17:50.808272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"07d3d99e-014e-4924-827a-f3e2f87774c6","Type":"ContainerDied","Data":"758f8c37174f2caf5319c2f9f0bb18cfba6d9aa67126a7700441d9d09e670ddb"} Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.193889 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-x9r28"] Mar 21 09:17:51 crc kubenswrapper[4932]: E0321 09:17:51.194946 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0e44dc-9908-4a4f-bc21-18d232f11ec6" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.194966 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0e44dc-9908-4a4f-bc21-18d232f11ec6" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: E0321 09:17:51.194993 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69b6abf-7357-49bd-8f5c-0fe8b1382238" containerName="mariadb-database-create" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195002 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69b6abf-7357-49bd-8f5c-0fe8b1382238" containerName="mariadb-database-create" Mar 21 09:17:51 crc kubenswrapper[4932]: E0321 09:17:51.195021 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041ac472-88ec-4f81-8e91-a0c80ed96a97" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195032 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="041ac472-88ec-4f81-8e91-a0c80ed96a97" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: E0321 09:17:51.195049 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0c083c-1e70-4300-b533-4adbba1989e2" containerName="mariadb-database-create" Mar 21 09:17:51 crc kubenswrapper[4932]: 
I0321 09:17:51.195058 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0c083c-1e70-4300-b533-4adbba1989e2" containerName="mariadb-database-create" Mar 21 09:17:51 crc kubenswrapper[4932]: E0321 09:17:51.195080 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7962d98-3f35-4702-8035-a15b0b2223c8" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195089 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7962d98-3f35-4702-8035-a15b0b2223c8" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195369 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7962d98-3f35-4702-8035-a15b0b2223c8" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195384 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c69b6abf-7357-49bd-8f5c-0fe8b1382238" containerName="mariadb-database-create" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195395 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0c083c-1e70-4300-b533-4adbba1989e2" containerName="mariadb-database-create" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195410 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="041ac472-88ec-4f81-8e91-a0c80ed96a97" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.195429 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0e44dc-9908-4a4f-bc21-18d232f11ec6" containerName="mariadb-account-create-update" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.196300 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.198762 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pn4p2" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.198771 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.216226 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x9r28"] Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.265657 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9xr\" (UniqueName: \"kubernetes.io/projected/eff1f3bd-6d64-4d74-888b-56619d289f45-kube-api-access-sg9xr\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.265799 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-db-sync-config-data\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.265833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-combined-ca-bundle\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.265851 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-config-data\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.367645 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-db-sync-config-data\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.367958 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-combined-ca-bundle\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.368048 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-config-data\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.368175 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9xr\" (UniqueName: \"kubernetes.io/projected/eff1f3bd-6d64-4d74-888b-56619d289f45-kube-api-access-sg9xr\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.373433 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-db-sync-config-data\") pod \"glance-db-sync-x9r28\" (UID: 
\"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.373496 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-config-data\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.376530 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-combined-ca-bundle\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.394394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9xr\" (UniqueName: \"kubernetes.io/projected/eff1f3bd-6d64-4d74-888b-56619d289f45-kube-api-access-sg9xr\") pod \"glance-db-sync-x9r28\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:51 crc kubenswrapper[4932]: I0321 09:17:51.526029 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-x9r28" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:51.820748 4932 generic.go:334] "Generic (PLEG): container finished" podID="52bf7d16-ddac-464e-aca0-7756f5a9f696" containerID="0f8f2d414c512b789fa5243857d9b5371289822c973afa9643e18f0117903294" exitCode=0 Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:51.820812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52bf7d16-ddac-464e-aca0-7756f5a9f696","Type":"ContainerDied","Data":"0f8f2d414c512b789fa5243857d9b5371289822c973afa9643e18f0117903294"} Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:52.069135 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-x9r28"] Mar 21 09:17:53 crc kubenswrapper[4932]: W0321 09:17:52.070401 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeff1f3bd_6d64_4d74_888b_56619d289f45.slice/crio-0484a8dafe745fac6c5b8013cf3573db849f6ca6fd780373011e069c205587b6 WatchSource:0}: Error finding container 0484a8dafe745fac6c5b8013cf3573db849f6ca6fd780373011e069c205587b6: Status 404 returned error can't find the container with id 0484a8dafe745fac6c5b8013cf3573db849f6ca6fd780373011e069c205587b6 Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:52.831595 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"07d3d99e-014e-4924-827a-f3e2f87774c6","Type":"ContainerStarted","Data":"774f082a84743c9e7d097d9986a4fa03e27580dbc9a545863664fe0ae57c88a7"} Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:52.832560 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x9r28" event={"ID":"eff1f3bd-6d64-4d74-888b-56619d289f45","Type":"ContainerStarted","Data":"0484a8dafe745fac6c5b8013cf3573db849f6ca6fd780373011e069c205587b6"} Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 
09:17:52.833734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"debe0e76-6d3a-402f-af21-a3ba7ceb5a24","Type":"ContainerStarted","Data":"99740b63bfdaada481a73f5f4a14514c33f473d6b886bc1acf15e29894b5e5b8"} Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:52.834001 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:52.856711 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371973.998081 podStartE2EDuration="1m2.856694362s" podCreationTimestamp="2026-03-21 09:16:50 +0000 UTC" firstStartedPulling="2026-03-21 09:16:52.823691074 +0000 UTC m=+1116.418889343" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:17:52.854573439 +0000 UTC m=+1176.449771758" watchObservedRunningTime="2026-03-21 09:17:52.856694362 +0000 UTC m=+1176.451892631" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.170895 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.727983 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.791280 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-dvmxq"] Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.792128 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerName="dnsmasq-dns" containerID="cri-o://d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c" gracePeriod=10 Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.903019 4932 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52bf7d16-ddac-464e-aca0-7756f5a9f696","Type":"ContainerStarted","Data":"23d16d1c8edcfdbfbb63f900ddd0fc97fa3bfdf126cf570e510fbaf5d9a857e5"} Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.904062 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.904693 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:17:53 crc kubenswrapper[4932]: I0321 09:17:53.990813 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=39.648691183 podStartE2EDuration="1m2.990795038s" podCreationTimestamp="2026-03-21 09:16:51 +0000 UTC" firstStartedPulling="2026-03-21 09:16:53.71608566 +0000 UTC m=+1117.311283929" lastFinishedPulling="2026-03-21 09:17:17.058189515 +0000 UTC m=+1140.653387784" observedRunningTime="2026-03-21 09:17:53.986502719 +0000 UTC m=+1177.581701018" watchObservedRunningTime="2026-03-21 09:17:53.990795038 +0000 UTC m=+1177.585993307" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.532182 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.561895 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.930063669 podStartE2EDuration="1m3.561861249s" podCreationTimestamp="2026-03-21 09:16:51 +0000 UTC" firstStartedPulling="2026-03-21 09:16:53.622088294 +0000 UTC m=+1117.217286563" lastFinishedPulling="2026-03-21 09:17:17.253885874 +0000 UTC m=+1140.849084143" observedRunningTime="2026-03-21 09:17:54.04716706 +0000 UTC m=+1177.642365339" watchObservedRunningTime="2026-03-21 09:17:54.561861249 +0000 UTC m=+1178.157059518" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.640977 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmbqg\" (UniqueName: \"kubernetes.io/projected/67d7398f-51b2-4776-bd9a-936ba72c2d6e-kube-api-access-nmbqg\") pod \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.641104 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-dns-svc\") pod \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.641158 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-config\") pod \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.641322 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-sb\") 
pod \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.641475 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-nb\") pod \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\" (UID: \"67d7398f-51b2-4776-bd9a-936ba72c2d6e\") " Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.659456 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d7398f-51b2-4776-bd9a-936ba72c2d6e-kube-api-access-nmbqg" (OuterVolumeSpecName: "kube-api-access-nmbqg") pod "67d7398f-51b2-4776-bd9a-936ba72c2d6e" (UID: "67d7398f-51b2-4776-bd9a-936ba72c2d6e"). InnerVolumeSpecName "kube-api-access-nmbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.687469 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-config" (OuterVolumeSpecName: "config") pod "67d7398f-51b2-4776-bd9a-936ba72c2d6e" (UID: "67d7398f-51b2-4776-bd9a-936ba72c2d6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.689832 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67d7398f-51b2-4776-bd9a-936ba72c2d6e" (UID: "67d7398f-51b2-4776-bd9a-936ba72c2d6e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.704342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67d7398f-51b2-4776-bd9a-936ba72c2d6e" (UID: "67d7398f-51b2-4776-bd9a-936ba72c2d6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.709921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67d7398f-51b2-4776-bd9a-936ba72c2d6e" (UID: "67d7398f-51b2-4776-bd9a-936ba72c2d6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.744201 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmbqg\" (UniqueName: \"kubernetes.io/projected/67d7398f-51b2-4776-bd9a-936ba72c2d6e-kube-api-access-nmbqg\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.744229 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.744239 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.744250 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:54 crc 
kubenswrapper[4932]: I0321 09:17:54.744260 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67d7398f-51b2-4776-bd9a-936ba72c2d6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.916494 4932 generic.go:334] "Generic (PLEG): container finished" podID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerID="d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c" exitCode=0 Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.916861 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" event={"ID":"67d7398f-51b2-4776-bd9a-936ba72c2d6e","Type":"ContainerDied","Data":"d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c"} Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.917312 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" event={"ID":"67d7398f-51b2-4776-bd9a-936ba72c2d6e","Type":"ContainerDied","Data":"1d1b9d49dd2c53012b029be9c6eace4e5fc96df865f7061069f7227dde0ae5cc"} Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.917341 4932 scope.go:117] "RemoveContainer" containerID="d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.917061 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8ff78869-dvmxq" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.963060 4932 scope.go:117] "RemoveContainer" containerID="4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15" Mar 21 09:17:54 crc kubenswrapper[4932]: I0321 09:17:54.988085 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-dvmxq"] Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.001542 4932 scope.go:117] "RemoveContainer" containerID="d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c" Mar 21 09:17:55 crc kubenswrapper[4932]: E0321 09:17:55.002062 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c\": container with ID starting with d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c not found: ID does not exist" containerID="d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c" Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.002102 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c"} err="failed to get container status \"d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c\": rpc error: code = NotFound desc = could not find container \"d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c\": container with ID starting with d4463764bc0f7befb8a50cc6e5651dd985ca3141424f081ef6c4867a1903ca0c not found: ID does not exist" Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.002133 4932 scope.go:117] "RemoveContainer" containerID="4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15" Mar 21 09:17:55 crc kubenswrapper[4932]: E0321 09:17:55.002367 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15\": container with ID starting with 4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15 not found: ID does not exist" containerID="4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15" Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.002398 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15"} err="failed to get container status \"4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15\": rpc error: code = NotFound desc = could not find container \"4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15\": container with ID starting with 4fb70ff7418694b1aa491f4cdb0d33efb0170211e33b4b43b2fb01a6f7ea6b15 not found: ID does not exist" Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.004241 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8ff78869-dvmxq"] Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.559628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:17:55 crc kubenswrapper[4932]: E0321 09:17:55.559795 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 21 09:17:55 crc kubenswrapper[4932]: E0321 09:17:55.559813 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 21 09:17:55 crc kubenswrapper[4932]: E0321 09:17:55.559864 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift 
podName:a350804d-f44d-4a1c-b748-24af07a9e811 nodeName:}" failed. No retries permitted until 2026-03-21 09:18:11.559849416 +0000 UTC m=+1195.155047685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift") pod "swift-storage-0" (UID: "a350804d-f44d-4a1c-b748-24af07a9e811") : configmap "swift-ring-files" not found Mar 21 09:17:55 crc kubenswrapper[4932]: I0321 09:17:55.712427 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" path="/var/lib/kubelet/pods/67d7398f-51b2-4776-bd9a-936ba72c2d6e/volumes" Mar 21 09:17:56 crc kubenswrapper[4932]: I0321 09:17:56.943382 4932 generic.go:334] "Generic (PLEG): container finished" podID="59f360d4-96d6-4693-a5f8-2473c4d55eca" containerID="6fcf9a609aefe11b8ea854525df3c237783b591f8a3077ae5c957caa5dc04c3b" exitCode=0 Mar 21 09:17:56 crc kubenswrapper[4932]: I0321 09:17:56.943738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rxcdd" event={"ID":"59f360d4-96d6-4693-a5f8-2473c4d55eca","Type":"ContainerDied","Data":"6fcf9a609aefe11b8ea854525df3c237783b591f8a3077ae5c957caa5dc04c3b"} Mar 21 09:17:56 crc kubenswrapper[4932]: I0321 09:17:56.956794 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vk8zs" podUID="86467dc0-186a-407d-b23b-5f1cc14a54ec" containerName="ovn-controller" probeResult="failure" output=< Mar 21 09:17:56 crc kubenswrapper[4932]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 21 09:17:56 crc kubenswrapper[4932]: > Mar 21 09:17:56 crc kubenswrapper[4932]: I0321 09:17:56.975236 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:56 crc kubenswrapper[4932]: I0321 09:17:56.976891 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-kdvp8" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.202117 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vk8zs-config-p278n"] Mar 21 09:17:57 crc kubenswrapper[4932]: E0321 09:17:57.204444 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerName="init" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.204468 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerName="init" Mar 21 09:17:57 crc kubenswrapper[4932]: E0321 09:17:57.204487 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerName="dnsmasq-dns" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.204495 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerName="dnsmasq-dns" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.204678 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d7398f-51b2-4776-bd9a-936ba72c2d6e" containerName="dnsmasq-dns" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.205431 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.216668 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.239106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vk8zs-config-p278n"] Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.289858 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run-ovn\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.289906 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-additional-scripts\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.290031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.290051 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-log-ovn\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: 
\"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.290069 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9bf\" (UniqueName: \"kubernetes.io/projected/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-kube-api-access-2b9bf\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.290091 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-scripts\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.391404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-log-ovn\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.391447 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9bf\" (UniqueName: \"kubernetes.io/projected/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-kube-api-access-2b9bf\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.391465 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run\") pod 
\"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.391491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-scripts\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.391541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run-ovn\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.391569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-additional-scripts\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.392095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-log-ovn\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.392119 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: 
\"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.392262 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run-ovn\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.392466 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-additional-scripts\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.394243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-scripts\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.427183 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9bf\" (UniqueName: \"kubernetes.io/projected/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-kube-api-access-2b9bf\") pod \"ovn-controller-vk8zs-config-p278n\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:57 crc kubenswrapper[4932]: I0321 09:17:57.535148 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.015198 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vk8zs-config-p278n"] Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.277170 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-combined-ca-bundle\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307372 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjdkj\" (UniqueName: \"kubernetes.io/projected/59f360d4-96d6-4693-a5f8-2473c4d55eca-kube-api-access-zjdkj\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307414 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-swiftconf\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307459 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f360d4-96d6-4693-a5f8-2473c4d55eca-etc-swift\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307485 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-dispersionconf\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307517 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-ring-data-devices\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.307544 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-scripts\") pod \"59f360d4-96d6-4693-a5f8-2473c4d55eca\" (UID: \"59f360d4-96d6-4693-a5f8-2473c4d55eca\") " Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.314533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.314948 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f360d4-96d6-4693-a5f8-2473c4d55eca-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.337838 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f360d4-96d6-4693-a5f8-2473c4d55eca-kube-api-access-zjdkj" (OuterVolumeSpecName: "kube-api-access-zjdkj") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "kube-api-access-zjdkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.359627 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.380012 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.380627 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-scripts" (OuterVolumeSpecName: "scripts") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.395599 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "59f360d4-96d6-4693-a5f8-2473c4d55eca" (UID: "59f360d4-96d6-4693-a5f8-2473c4d55eca"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409318 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjdkj\" (UniqueName: \"kubernetes.io/projected/59f360d4-96d6-4693-a5f8-2473c4d55eca-kube-api-access-zjdkj\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409379 4932 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409392 4932 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/59f360d4-96d6-4693-a5f8-2473c4d55eca-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409405 4932 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409418 4932 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409428 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/59f360d4-96d6-4693-a5f8-2473c4d55eca-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.409441 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f360d4-96d6-4693-a5f8-2473c4d55eca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.960451 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rxcdd" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.961404 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rxcdd" event={"ID":"59f360d4-96d6-4693-a5f8-2473c4d55eca","Type":"ContainerDied","Data":"22ee8815ef7992df02a4cc28dfd20dd5838778b801f260b58df58e2aaa6ca6da"} Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.961460 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ee8815ef7992df02a4cc28dfd20dd5838778b801f260b58df58e2aaa6ca6da" Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.964208 4932 generic.go:334] "Generic (PLEG): container finished" podID="06341c07-2cf6-47d8-9948-e6e8e3ce39a5" containerID="e8317985e3807488c4d5152971d4e5f8f4622e87b6a26ed0178a0d398fe681bd" exitCode=0 Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.964245 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vk8zs-config-p278n" event={"ID":"06341c07-2cf6-47d8-9948-e6e8e3ce39a5","Type":"ContainerDied","Data":"e8317985e3807488c4d5152971d4e5f8f4622e87b6a26ed0178a0d398fe681bd"} Mar 21 09:17:58 crc kubenswrapper[4932]: I0321 09:17:58.964267 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vk8zs-config-p278n" event={"ID":"06341c07-2cf6-47d8-9948-e6e8e3ce39a5","Type":"ContainerStarted","Data":"0cefb5198778d07da0f27f3f1b685d8d5f01d7f4654ed85a00e0da4cf1302c39"} Mar 21 
09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.007791 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.011341 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.156806 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568078-9kxgq"] Mar 21 09:18:00 crc kubenswrapper[4932]: E0321 09:18:00.157517 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f360d4-96d6-4693-a5f8-2473c4d55eca" containerName="swift-ring-rebalance" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.157530 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f360d4-96d6-4693-a5f8-2473c4d55eca" containerName="swift-ring-rebalance" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.157746 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f360d4-96d6-4693-a5f8-2473c4d55eca" containerName="swift-ring-rebalance" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.158397 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.161796 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.161924 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.162078 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.178674 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568078-9kxgq"] Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.344153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5jfw\" (UniqueName: \"kubernetes.io/projected/27dadb54-a5a3-4fab-978e-0453dc63539f-kube-api-access-z5jfw\") pod \"auto-csr-approver-29568078-9kxgq\" (UID: \"27dadb54-a5a3-4fab-978e-0453dc63539f\") " pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.445823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5jfw\" (UniqueName: \"kubernetes.io/projected/27dadb54-a5a3-4fab-978e-0453dc63539f-kube-api-access-z5jfw\") pod \"auto-csr-approver-29568078-9kxgq\" (UID: \"27dadb54-a5a3-4fab-978e-0453dc63539f\") " pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.475842 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5jfw\" (UniqueName: \"kubernetes.io/projected/27dadb54-a5a3-4fab-978e-0453dc63539f-kube-api-access-z5jfw\") pod \"auto-csr-approver-29568078-9kxgq\" (UID: \"27dadb54-a5a3-4fab-978e-0453dc63539f\") " 
pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.479680 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:00 crc kubenswrapper[4932]: I0321 09:18:00.981876 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:01 crc kubenswrapper[4932]: I0321 09:18:01.943363 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vk8zs" Mar 21 09:18:02 crc kubenswrapper[4932]: I0321 09:18:02.127684 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="debe0e76-6d3a-402f-af21-a3ba7ceb5a24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Mar 21 09:18:02 crc kubenswrapper[4932]: I0321 09:18:02.967298 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="07d3d99e-014e-4924-827a-f3e2f87774c6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Mar 21 09:18:04 crc kubenswrapper[4932]: I0321 09:18:04.402125 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:18:04 crc kubenswrapper[4932]: I0321 09:18:04.402396 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="prometheus" containerID="cri-o://3a06633c76e1f2704b917e24aa79245a033df4c4760636974cda10b880511129" gracePeriod=600 Mar 21 09:18:04 crc kubenswrapper[4932]: I0321 09:18:04.402474 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" 
containerName="thanos-sidecar" containerID="cri-o://63659034ab0e5e587e585ecb664d8c733e56cdd3655e8f17e0b4c0c6ccaaa621" gracePeriod=600 Mar 21 09:18:04 crc kubenswrapper[4932]: I0321 09:18:04.402512 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="config-reloader" containerID="cri-o://c96bc826f8d211eb20fcc50df225774b214ec8f2952ac456d9c986a813a944a0" gracePeriod=600 Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 09:18:05.008389 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.116:9090/-/ready\": dial tcp 10.217.0.116:9090: connect: connection refused" Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 09:18:05.016697 4932 generic.go:334] "Generic (PLEG): container finished" podID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerID="63659034ab0e5e587e585ecb664d8c733e56cdd3655e8f17e0b4c0c6ccaaa621" exitCode=0 Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 09:18:05.016728 4932 generic.go:334] "Generic (PLEG): container finished" podID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerID="c96bc826f8d211eb20fcc50df225774b214ec8f2952ac456d9c986a813a944a0" exitCode=0 Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 09:18:05.016738 4932 generic.go:334] "Generic (PLEG): container finished" podID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerID="3a06633c76e1f2704b917e24aa79245a033df4c4760636974cda10b880511129" exitCode=0 Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 09:18:05.016758 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerDied","Data":"63659034ab0e5e587e585ecb664d8c733e56cdd3655e8f17e0b4c0c6ccaaa621"} Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 
09:18:05.016786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerDied","Data":"c96bc826f8d211eb20fcc50df225774b214ec8f2952ac456d9c986a813a944a0"} Mar 21 09:18:05 crc kubenswrapper[4932]: I0321 09:18:05.016797 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerDied","Data":"3a06633c76e1f2704b917e24aa79245a033df4c4760636974cda10b880511129"} Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.273139 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run-ovn\") pod \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370215 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-additional-scripts\") pod \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370234 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run\") pod \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370287 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b9bf\" 
(UniqueName: \"kubernetes.io/projected/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-kube-api-access-2b9bf\") pod \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370330 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-log-ovn\") pod \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370384 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run" (OuterVolumeSpecName: "var-run") pod "06341c07-2cf6-47d8-9948-e6e8e3ce39a5" (UID: "06341c07-2cf6-47d8-9948-e6e8e3ce39a5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.370412 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-scripts\") pod \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\" (UID: \"06341c07-2cf6-47d8-9948-e6e8e3ce39a5\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371074 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "06341c07-2cf6-47d8-9948-e6e8e3ce39a5" (UID: "06341c07-2cf6-47d8-9948-e6e8e3ce39a5"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371190 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "06341c07-2cf6-47d8-9948-e6e8e3ce39a5" (UID: "06341c07-2cf6-47d8-9948-e6e8e3ce39a5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371232 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "06341c07-2cf6-47d8-9948-e6e8e3ce39a5" (UID: "06341c07-2cf6-47d8-9948-e6e8e3ce39a5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371308 4932 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371328 4932 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371359 4932 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.371489 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-scripts" (OuterVolumeSpecName: "scripts") pod 
"06341c07-2cf6-47d8-9948-e6e8e3ce39a5" (UID: "06341c07-2cf6-47d8-9948-e6e8e3ce39a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.377647 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-kube-api-access-2b9bf" (OuterVolumeSpecName: "kube-api-access-2b9bf") pod "06341c07-2cf6-47d8-9948-e6e8e3ce39a5" (UID: "06341c07-2cf6-47d8-9948-e6e8e3ce39a5"). InnerVolumeSpecName "kube-api-access-2b9bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.405569 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.474435 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b9bf\" (UniqueName: \"kubernetes.io/projected/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-kube-api-access-2b9bf\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.474474 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.474483 4932 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06341c07-2cf6-47d8-9948-e6e8e3ce39a5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.575748 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tf2\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-kube-api-access-74tf2\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 
09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.575901 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.575985 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec155900-6777-4362-8c9c-ea98a8e245a8-config-out\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-web-config\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576067 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-config\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576094 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-thanos-prometheus-http-client-file\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576127 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-tls-assets\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576161 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-1\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576230 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-2\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.576271 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-0\") pod \"ec155900-6777-4362-8c9c-ea98a8e245a8\" (UID: \"ec155900-6777-4362-8c9c-ea98a8e245a8\") " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.577100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.577451 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.578339 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.582450 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.582503 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-config" (OuterVolumeSpecName: "config") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.582573 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-kube-api-access-74tf2" (OuterVolumeSpecName: "kube-api-access-74tf2") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "kube-api-access-74tf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.588157 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec155900-6777-4362-8c9c-ea98a8e245a8-config-out" (OuterVolumeSpecName: "config-out") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.588539 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.606187 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.618483 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-web-config" (OuterVolumeSpecName: "web-config") pod "ec155900-6777-4362-8c9c-ea98a8e245a8" (UID: "ec155900-6777-4362-8c9c-ea98a8e245a8"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.641090 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568078-9kxgq"] Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.651631 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678523 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") on node \"crc\" " Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678799 4932 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec155900-6777-4362-8c9c-ea98a8e245a8-config-out\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678811 4932 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-web-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678821 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678830 4932 reconciler_common.go:293] "Volume 
detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec155900-6777-4362-8c9c-ea98a8e245a8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678843 4932 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678855 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678865 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678874 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec155900-6777-4362-8c9c-ea98a8e245a8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.678882 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74tf2\" (UniqueName: \"kubernetes.io/projected/ec155900-6777-4362-8c9c-ea98a8e245a8-kube-api-access-74tf2\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.709031 4932 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.709169 4932 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f") on node "crc" Mar 21 09:18:07 crc kubenswrapper[4932]: I0321 09:18:07.780790 4932 reconciler_common.go:293] "Volume detached for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.046800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" event={"ID":"27dadb54-a5a3-4fab-978e-0453dc63539f","Type":"ContainerStarted","Data":"6408ebdf5965b08a03fa213c9ebd52a4bc82b0d3b0cef4c0d4204b0c4c57573d"} Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.048777 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vk8zs-config-p278n" event={"ID":"06341c07-2cf6-47d8-9948-e6e8e3ce39a5","Type":"ContainerDied","Data":"0cefb5198778d07da0f27f3f1b685d8d5f01d7f4654ed85a00e0da4cf1302c39"} Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.048853 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cefb5198778d07da0f27f3f1b685d8d5f01d7f4654ed85a00e0da4cf1302c39" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.048798 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vk8zs-config-p278n" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.052122 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec155900-6777-4362-8c9c-ea98a8e245a8","Type":"ContainerDied","Data":"ad11e0063b0cbffb4e02a9bbd237b5df2d971ebeb3471998749a75a995054f16"} Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.052209 4932 scope.go:117] "RemoveContainer" containerID="63659034ab0e5e587e585ecb664d8c733e56cdd3655e8f17e0b4c0c6ccaaa621" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.052155 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.055804 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x9r28" event={"ID":"eff1f3bd-6d64-4d74-888b-56619d289f45","Type":"ContainerStarted","Data":"6dfe4246b75985df7887f95b45a10964c4ad72834fb7b91a798d97e884f4b070"} Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.076977 4932 scope.go:117] "RemoveContainer" containerID="c96bc826f8d211eb20fcc50df225774b214ec8f2952ac456d9c986a813a944a0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.093002 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-x9r28" podStartSLOduration=1.958626411 podStartE2EDuration="17.091637254s" podCreationTimestamp="2026-03-21 09:17:51 +0000 UTC" firstStartedPulling="2026-03-21 09:17:52.072589833 +0000 UTC m=+1175.667788102" lastFinishedPulling="2026-03-21 09:18:07.205600676 +0000 UTC m=+1190.800798945" observedRunningTime="2026-03-21 09:18:08.090286376 +0000 UTC m=+1191.685484655" watchObservedRunningTime="2026-03-21 09:18:08.091637254 +0000 UTC m=+1191.686835523" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.121961 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.136630 4932 scope.go:117] "RemoveContainer" containerID="3a06633c76e1f2704b917e24aa79245a033df4c4760636974cda10b880511129" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.139101 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.160627 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:18:08 crc kubenswrapper[4932]: E0321 09:18:08.160968 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="config-reloader" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.160984 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="config-reloader" Mar 21 09:18:08 crc kubenswrapper[4932]: E0321 09:18:08.160993 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="init-config-reloader" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161000 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="init-config-reloader" Mar 21 09:18:08 crc kubenswrapper[4932]: E0321 09:18:08.161013 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06341c07-2cf6-47d8-9948-e6e8e3ce39a5" containerName="ovn-config" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161020 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="06341c07-2cf6-47d8-9948-e6e8e3ce39a5" containerName="ovn-config" Mar 21 09:18:08 crc kubenswrapper[4932]: E0321 09:18:08.161039 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="prometheus" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 
09:18:08.161044 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="prometheus" Mar 21 09:18:08 crc kubenswrapper[4932]: E0321 09:18:08.161055 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="thanos-sidecar" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161061 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="thanos-sidecar" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161213 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="thanos-sidecar" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161228 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="06341c07-2cf6-47d8-9948-e6e8e3ce39a5" containerName="ovn-config" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161238 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="prometheus" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.161245 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" containerName="config-reloader" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.163203 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.174930 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.175633 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.176074 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.177163 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.177433 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.177455 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.177661 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.177845 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7sg4x" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.179518 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.189220 4932 scope.go:117] "RemoveContainer" containerID="b3f7a9ebf09c962bfc77cd713a69c5b6eab7ea38596e4511c70ad60d57541524" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.219922 4932 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289327 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289398 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289435 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289467 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c8728f16-950d-456c-865c-87365e4bc418-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289596 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-config\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289695 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66n92\" (UniqueName: \"kubernetes.io/projected/c8728f16-950d-456c-865c-87365e4bc418-kube-api-access-66n92\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289741 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c8728f16-950d-456c-865c-87365e4bc418-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289777 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 
09:18:08.289832 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.289967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.290101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.388843 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vk8zs-config-p278n"] Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392590 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392651 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392716 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392795 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392833 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392881 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.392969 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c8728f16-950d-456c-865c-87365e4bc418-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.393004 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-config\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.393052 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66n92\" (UniqueName: \"kubernetes.io/projected/c8728f16-950d-456c-865c-87365e4bc418-kube-api-access-66n92\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.393098 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c8728f16-950d-456c-865c-87365e4bc418-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.394717 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.395278 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.395494 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c8728f16-950d-456c-865c-87365e4bc418-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.400197 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.401201 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.401863 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-config\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.402722 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.402756 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c8728f16-950d-456c-865c-87365e4bc418-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.403185 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vk8zs-config-p278n"] Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.406205 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.406495 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c8728f16-950d-456c-865c-87365e4bc418-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.410068 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/c8728f16-950d-456c-865c-87365e4bc418-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.410569 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.410619 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2fe0a73783cbe795f5f78fb4762619a3b18dc91982a9d49dcd3d68ffc16f7f99/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.419779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66n92\" (UniqueName: \"kubernetes.io/projected/c8728f16-950d-456c-865c-87365e4bc418-kube-api-access-66n92\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.451549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb575f16-5543-4275-ab9d-5e9abc29d24f\") pod \"prometheus-metric-storage-0\" (UID: \"c8728f16-950d-456c-865c-87365e4bc418\") " pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.499219 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:08 crc kubenswrapper[4932]: W0321 09:18:08.970666 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8728f16_950d_456c_865c_87365e4bc418.slice/crio-5b4451cd5174b998b341a711739eb20005d128f94a6d493cbae0bc9e1dabff33 WatchSource:0}: Error finding container 5b4451cd5174b998b341a711739eb20005d128f94a6d493cbae0bc9e1dabff33: Status 404 returned error can't find the container with id 5b4451cd5174b998b341a711739eb20005d128f94a6d493cbae0bc9e1dabff33 Mar 21 09:18:08 crc kubenswrapper[4932]: I0321 09:18:08.973741 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 21 09:18:09 crc kubenswrapper[4932]: I0321 09:18:09.067553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" event={"ID":"27dadb54-a5a3-4fab-978e-0453dc63539f","Type":"ContainerStarted","Data":"590c7745cc01e3d5ed8589f6da0bd35cefde7190849e23121d45b81c4a286f6b"} Mar 21 09:18:09 crc kubenswrapper[4932]: I0321 09:18:09.069641 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c8728f16-950d-456c-865c-87365e4bc418","Type":"ContainerStarted","Data":"5b4451cd5174b998b341a711739eb20005d128f94a6d493cbae0bc9e1dabff33"} Mar 21 09:18:09 crc kubenswrapper[4932]: I0321 09:18:09.101039 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" podStartSLOduration=8.059579218 podStartE2EDuration="9.101018577s" podCreationTimestamp="2026-03-21 09:18:00 +0000 UTC" firstStartedPulling="2026-03-21 09:18:07.651445214 +0000 UTC m=+1191.246643483" lastFinishedPulling="2026-03-21 09:18:08.692884563 +0000 UTC m=+1192.288082842" observedRunningTime="2026-03-21 09:18:09.092776516 +0000 UTC m=+1192.687974785" watchObservedRunningTime="2026-03-21 
09:18:09.101018577 +0000 UTC m=+1192.696216846" Mar 21 09:18:09 crc kubenswrapper[4932]: I0321 09:18:09.714556 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06341c07-2cf6-47d8-9948-e6e8e3ce39a5" path="/var/lib/kubelet/pods/06341c07-2cf6-47d8-9948-e6e8e3ce39a5/volumes" Mar 21 09:18:09 crc kubenswrapper[4932]: I0321 09:18:09.715946 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec155900-6777-4362-8c9c-ea98a8e245a8" path="/var/lib/kubelet/pods/ec155900-6777-4362-8c9c-ea98a8e245a8/volumes" Mar 21 09:18:10 crc kubenswrapper[4932]: I0321 09:18:10.077995 4932 generic.go:334] "Generic (PLEG): container finished" podID="27dadb54-a5a3-4fab-978e-0453dc63539f" containerID="590c7745cc01e3d5ed8589f6da0bd35cefde7190849e23121d45b81c4a286f6b" exitCode=0 Mar 21 09:18:10 crc kubenswrapper[4932]: I0321 09:18:10.078040 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" event={"ID":"27dadb54-a5a3-4fab-978e-0453dc63539f","Type":"ContainerDied","Data":"590c7745cc01e3d5ed8589f6da0bd35cefde7190849e23121d45b81c4a286f6b"} Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.468233 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.551647 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5jfw\" (UniqueName: \"kubernetes.io/projected/27dadb54-a5a3-4fab-978e-0453dc63539f-kube-api-access-z5jfw\") pod \"27dadb54-a5a3-4fab-978e-0453dc63539f\" (UID: \"27dadb54-a5a3-4fab-978e-0453dc63539f\") " Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.557904 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27dadb54-a5a3-4fab-978e-0453dc63539f-kube-api-access-z5jfw" (OuterVolumeSpecName: "kube-api-access-z5jfw") pod "27dadb54-a5a3-4fab-978e-0453dc63539f" (UID: "27dadb54-a5a3-4fab-978e-0453dc63539f"). InnerVolumeSpecName "kube-api-access-z5jfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.653970 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.654195 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5jfw\" (UniqueName: \"kubernetes.io/projected/27dadb54-a5a3-4fab-978e-0453dc63539f-kube-api-access-z5jfw\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.659993 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a350804d-f44d-4a1c-b748-24af07a9e811-etc-swift\") pod \"swift-storage-0\" (UID: \"a350804d-f44d-4a1c-b748-24af07a9e811\") " pod="openstack/swift-storage-0" Mar 21 09:18:11 crc kubenswrapper[4932]: I0321 09:18:11.814739 4932 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/swift-storage-0" Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.094889 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" event={"ID":"27dadb54-a5a3-4fab-978e-0453dc63539f","Type":"ContainerDied","Data":"6408ebdf5965b08a03fa213c9ebd52a4bc82b0d3b0cef4c0d4204b0c4c57573d"} Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.095259 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6408ebdf5965b08a03fa213c9ebd52a4bc82b0d3b0cef4c0d4204b0c4c57573d" Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.095331 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568078-9kxgq" Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.097579 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c8728f16-950d-456c-865c-87365e4bc418","Type":"ContainerStarted","Data":"5cb10cf515fa51b64ad7fd7c9b7e2b567270966feb4ee11e9190d5d3cde81eb3"} Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.124889 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="debe0e76-6d3a-402f-af21-a3ba7ceb5a24" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.163817 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568072-4bx56"] Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.170572 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568072-4bx56"] Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.332700 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.886185 4932 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="52bf7d16-ddac-464e-aca0-7756f5a9f696" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused" Mar 21 09:18:12 crc kubenswrapper[4932]: I0321 09:18:12.965854 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="07d3d99e-014e-4924-827a-f3e2f87774c6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Mar 21 09:18:13 crc kubenswrapper[4932]: I0321 09:18:13.106208 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"1bee89c9acc30887871794f78623fd007f7b26b81370b7e0ffff551bc5139fe5"} Mar 21 09:18:13 crc kubenswrapper[4932]: I0321 09:18:13.106569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"6d21d570b425c9e9fec40089c14f56934100efd4ca4358c320d911198f7cfa51"} Mar 21 09:18:13 crc kubenswrapper[4932]: I0321 09:18:13.712007 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4099c5d2-c6c7-40c3-b462-b67e970eb8ed" path="/var/lib/kubelet/pods/4099c5d2-c6c7-40c3-b462-b67e970eb8ed/volumes" Mar 21 09:18:14 crc kubenswrapper[4932]: I0321 09:18:14.121189 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"94b35cde64bbb824b50742546da54932cdacb53380271faa2c30915fc374b8d4"} Mar 21 09:18:14 crc kubenswrapper[4932]: I0321 09:18:14.121240 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"3d4a1b3b97dee4d2772e020fa56c8cae9f75505e9f9ee3db3faa89c02c822791"} Mar 21 09:18:14 crc kubenswrapper[4932]: I0321 09:18:14.121256 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"450efca5a9767407a4c46437e4090d924d2faa21ec06b3da6090a2e00305320f"} Mar 21 09:18:15 crc kubenswrapper[4932]: I0321 09:18:15.136921 4932 generic.go:334] "Generic (PLEG): container finished" podID="eff1f3bd-6d64-4d74-888b-56619d289f45" containerID="6dfe4246b75985df7887f95b45a10964c4ad72834fb7b91a798d97e884f4b070" exitCode=0 Mar 21 09:18:15 crc kubenswrapper[4932]: I0321 09:18:15.137026 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x9r28" event={"ID":"eff1f3bd-6d64-4d74-888b-56619d289f45","Type":"ContainerDied","Data":"6dfe4246b75985df7887f95b45a10964c4ad72834fb7b91a798d97e884f4b070"} Mar 21 09:18:15 crc kubenswrapper[4932]: I0321 09:18:15.143027 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"35fff8133eb1ae221890ff6baf524cfb9ff49ccdb34c69c4821e473e2861af0d"} Mar 21 09:18:15 crc kubenswrapper[4932]: I0321 09:18:15.143090 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"1ef0b2754c29890d39669dd71d073fef73e5132f1dcf6fea576f46f702d71a59"} Mar 21 09:18:15 crc kubenswrapper[4932]: I0321 09:18:15.143108 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"84fa042b513ce2440c19a3e439d3ec49ea7fbd8e44b4172a80448895203c7b74"} Mar 21 09:18:15 crc kubenswrapper[4932]: I0321 09:18:15.143124 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"b106301550ac409d6569043b58aa73a15c901124d96ebbf0db19953243681bcf"} Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.166724 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"6ec0f96cd8e4caa74066176dc9dccc2800314dc603f86645ed8f6c39fea996bf"} Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.167150 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"20b88f6c854eb8b9e7cde00fd36525be2fc4d0a71bda261c4a76000a97174c95"} Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.530837 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x9r28" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.640235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9xr\" (UniqueName: \"kubernetes.io/projected/eff1f3bd-6d64-4d74-888b-56619d289f45-kube-api-access-sg9xr\") pod \"eff1f3bd-6d64-4d74-888b-56619d289f45\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.640296 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-combined-ca-bundle\") pod \"eff1f3bd-6d64-4d74-888b-56619d289f45\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.640416 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-db-sync-config-data\") pod \"eff1f3bd-6d64-4d74-888b-56619d289f45\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.640591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-config-data\") pod \"eff1f3bd-6d64-4d74-888b-56619d289f45\" (UID: \"eff1f3bd-6d64-4d74-888b-56619d289f45\") " Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.644797 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff1f3bd-6d64-4d74-888b-56619d289f45-kube-api-access-sg9xr" (OuterVolumeSpecName: "kube-api-access-sg9xr") pod "eff1f3bd-6d64-4d74-888b-56619d289f45" (UID: "eff1f3bd-6d64-4d74-888b-56619d289f45"). InnerVolumeSpecName "kube-api-access-sg9xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.645231 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eff1f3bd-6d64-4d74-888b-56619d289f45" (UID: "eff1f3bd-6d64-4d74-888b-56619d289f45"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.665416 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eff1f3bd-6d64-4d74-888b-56619d289f45" (UID: "eff1f3bd-6d64-4d74-888b-56619d289f45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.684710 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-config-data" (OuterVolumeSpecName: "config-data") pod "eff1f3bd-6d64-4d74-888b-56619d289f45" (UID: "eff1f3bd-6d64-4d74-888b-56619d289f45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.742385 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.742417 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg9xr\" (UniqueName: \"kubernetes.io/projected/eff1f3bd-6d64-4d74-888b-56619d289f45-kube-api-access-sg9xr\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.742428 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:16 crc kubenswrapper[4932]: I0321 09:18:16.742438 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eff1f3bd-6d64-4d74-888b-56619d289f45-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.183472 4932 generic.go:334] "Generic (PLEG): container finished" podID="c8728f16-950d-456c-865c-87365e4bc418" containerID="5cb10cf515fa51b64ad7fd7c9b7e2b567270966feb4ee11e9190d5d3cde81eb3" exitCode=0 Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.183941 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c8728f16-950d-456c-865c-87365e4bc418","Type":"ContainerDied","Data":"5cb10cf515fa51b64ad7fd7c9b7e2b567270966feb4ee11e9190d5d3cde81eb3"} Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.192811 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-x9r28" event={"ID":"eff1f3bd-6d64-4d74-888b-56619d289f45","Type":"ContainerDied","Data":"0484a8dafe745fac6c5b8013cf3573db849f6ca6fd780373011e069c205587b6"} Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.192850 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0484a8dafe745fac6c5b8013cf3573db849f6ca6fd780373011e069c205587b6" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.192911 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-x9r28" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.200796 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"bca330650736b808204c37a42315570b5f547e165f3de3518e1540b823f5c1a3"} Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.200849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"6d95796944b392dd93be1d3fa2cad5a0f2a65a308c1fe08ecd5ae70408070347"} Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.200862 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"6445215f8673d6e4da1a3619d5b133b3b5bfa4edc74f9d24ff251f3834ce95e7"} Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.580292 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6558dc855c-48pzz"] Mar 21 09:18:17 crc kubenswrapper[4932]: E0321 09:18:17.580675 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff1f3bd-6d64-4d74-888b-56619d289f45" containerName="glance-db-sync" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.580691 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff1f3bd-6d64-4d74-888b-56619d289f45" containerName="glance-db-sync" Mar 21 09:18:17 crc kubenswrapper[4932]: E0321 09:18:17.580711 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dadb54-a5a3-4fab-978e-0453dc63539f" containerName="oc" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.580718 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dadb54-a5a3-4fab-978e-0453dc63539f" containerName="oc" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.580889 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff1f3bd-6d64-4d74-888b-56619d289f45" containerName="glance-db-sync" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.580906 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dadb54-a5a3-4fab-978e-0453dc63539f" containerName="oc" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.581831 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.626420 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6558dc855c-48pzz"] Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.661169 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdtm\" (UniqueName: \"kubernetes.io/projected/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-kube-api-access-qgdtm\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.661236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-dns-svc\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.661301 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-sb\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.661360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-config\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.661409 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-nb\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.762555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-dns-svc\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.762669 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-sb\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.762753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-config\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.762789 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-nb\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.762876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdtm\" (UniqueName: 
\"kubernetes.io/projected/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-kube-api-access-qgdtm\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.764910 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-dns-svc\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.765730 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-sb\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.767184 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-nb\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.767644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-config\") pod \"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.799147 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdtm\" (UniqueName: \"kubernetes.io/projected/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-kube-api-access-qgdtm\") pod 
\"dnsmasq-dns-6558dc855c-48pzz\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:17 crc kubenswrapper[4932]: I0321 09:18:17.898965 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.215578 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c8728f16-950d-456c-865c-87365e4bc418","Type":"ContainerStarted","Data":"fe79b14b585af14a79a8cb0ec4cb6762a4731fbc1560abd7075afca8f0e36c36"} Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.222209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"c59bee2f87fee849aea32d30f85a5cf5dc70478ce05af7920fe151a8b0e469dd"} Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.222247 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a350804d-f44d-4a1c-b748-24af07a9e811","Type":"ContainerStarted","Data":"c7912feacad34128c68395fa2fe3ea3a9f8ab4215ad41dd68508cd7bd150c281"} Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.266650 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.876236552 podStartE2EDuration="40.266635531s" podCreationTimestamp="2026-03-21 09:17:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:12.33716284 +0000 UTC m=+1195.932361109" lastFinishedPulling="2026-03-21 09:18:15.727561799 +0000 UTC m=+1199.322760088" observedRunningTime="2026-03-21 09:18:18.264683271 +0000 UTC m=+1201.859881540" watchObservedRunningTime="2026-03-21 09:18:18.266635531 +0000 UTC m=+1201.861833800" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.383181 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-6558dc855c-48pzz"] Mar 21 09:18:18 crc kubenswrapper[4932]: W0321 09:18:18.385555 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4b136cc_b72d_40d0_bf33_d7edb2b6a514.slice/crio-93f0f9869ced1826b13541bde81e8bc3785b75fe7e1418d6f68f4f5c13fbd6ff WatchSource:0}: Error finding container 93f0f9869ced1826b13541bde81e8bc3785b75fe7e1418d6f68f4f5c13fbd6ff: Status 404 returned error can't find the container with id 93f0f9869ced1826b13541bde81e8bc3785b75fe7e1418d6f68f4f5c13fbd6ff Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.537844 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6558dc855c-48pzz"] Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.571468 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6bbf886c-qvkdt"] Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.573283 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.574786 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.581753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6bbf886c-qvkdt"] Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.680470 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2s2\" (UniqueName: \"kubernetes.io/projected/699d7b1e-6190-4000-8035-0a2c288a53f7-kube-api-access-9t2s2\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.680708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-swift-storage-0\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.680826 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.680967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: 
\"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.681059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-config\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.681261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-svc\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.782584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.782630 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.782655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-config\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " 
pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.782716 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-svc\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.782771 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t2s2\" (UniqueName: \"kubernetes.io/projected/699d7b1e-6190-4000-8035-0a2c288a53f7-kube-api-access-9t2s2\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.782801 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-swift-storage-0\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.783472 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.783858 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-config\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc 
kubenswrapper[4932]: I0321 09:18:18.784050 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.784285 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-swift-storage-0\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.784516 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-svc\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.801809 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t2s2\" (UniqueName: \"kubernetes.io/projected/699d7b1e-6190-4000-8035-0a2c288a53f7-kube-api-access-9t2s2\") pod \"dnsmasq-dns-7c6bbf886c-qvkdt\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:18 crc kubenswrapper[4932]: I0321 09:18:18.898160 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.232883 4932 generic.go:334] "Generic (PLEG): container finished" podID="a4b136cc-b72d-40d0-bf33-d7edb2b6a514" containerID="03ca4fbc9aad7b995c9b73e8a22247ec7d378b8cd3a7385178a302ed1389d2a3" exitCode=0 Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.232942 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" event={"ID":"a4b136cc-b72d-40d0-bf33-d7edb2b6a514","Type":"ContainerDied","Data":"03ca4fbc9aad7b995c9b73e8a22247ec7d378b8cd3a7385178a302ed1389d2a3"} Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.233238 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" event={"ID":"a4b136cc-b72d-40d0-bf33-d7edb2b6a514","Type":"ContainerStarted","Data":"93f0f9869ced1826b13541bde81e8bc3785b75fe7e1418d6f68f4f5c13fbd6ff"} Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.344292 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6bbf886c-qvkdt"] Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.639889 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.695829 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-dns-svc\") pod \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.696029 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-config\") pod \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.696065 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-nb\") pod \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.696570 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-sb\") pod \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.696617 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgdtm\" (UniqueName: \"kubernetes.io/projected/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-kube-api-access-qgdtm\") pod \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\" (UID: \"a4b136cc-b72d-40d0-bf33-d7edb2b6a514\") " Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.708889 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-kube-api-access-qgdtm" (OuterVolumeSpecName: "kube-api-access-qgdtm") pod "a4b136cc-b72d-40d0-bf33-d7edb2b6a514" (UID: "a4b136cc-b72d-40d0-bf33-d7edb2b6a514"). InnerVolumeSpecName "kube-api-access-qgdtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.719948 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4b136cc-b72d-40d0-bf33-d7edb2b6a514" (UID: "a4b136cc-b72d-40d0-bf33-d7edb2b6a514"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.720615 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-config" (OuterVolumeSpecName: "config") pod "a4b136cc-b72d-40d0-bf33-d7edb2b6a514" (UID: "a4b136cc-b72d-40d0-bf33-d7edb2b6a514"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.721295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4b136cc-b72d-40d0-bf33-d7edb2b6a514" (UID: "a4b136cc-b72d-40d0-bf33-d7edb2b6a514"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.721996 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4b136cc-b72d-40d0-bf33-d7edb2b6a514" (UID: "a4b136cc-b72d-40d0-bf33-d7edb2b6a514"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.799471 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.799518 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.799533 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.799545 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgdtm\" (UniqueName: \"kubernetes.io/projected/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-kube-api-access-qgdtm\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:19 crc kubenswrapper[4932]: I0321 09:18:19.799556 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b136cc-b72d-40d0-bf33-d7edb2b6a514-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.246043 4932 generic.go:334] "Generic (PLEG): container finished" podID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerID="f192b740d7f25bfce99b428d3272fcae81ec637ea301812b7017ffb5a463bd87" exitCode=0 Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.246108 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" event={"ID":"699d7b1e-6190-4000-8035-0a2c288a53f7","Type":"ContainerDied","Data":"f192b740d7f25bfce99b428d3272fcae81ec637ea301812b7017ffb5a463bd87"} Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 
09:18:20.246158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" event={"ID":"699d7b1e-6190-4000-8035-0a2c288a53f7","Type":"ContainerStarted","Data":"c8048acfb9f7f331e50fc9ed6ba544e418e65a1944bba718208e412afdfc4fd4"} Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.250265 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.250274 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6558dc855c-48pzz" event={"ID":"a4b136cc-b72d-40d0-bf33-d7edb2b6a514","Type":"ContainerDied","Data":"93f0f9869ced1826b13541bde81e8bc3785b75fe7e1418d6f68f4f5c13fbd6ff"} Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.250333 4932 scope.go:117] "RemoveContainer" containerID="03ca4fbc9aad7b995c9b73e8a22247ec7d378b8cd3a7385178a302ed1389d2a3" Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.255401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c8728f16-950d-456c-865c-87365e4bc418","Type":"ContainerStarted","Data":"0d53fd20ef1f78782ae12aaa1096a821bbc0b5aba9322e383a6768c4d1ad7a25"} Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.316408 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6558dc855c-48pzz"] Mar 21 09:18:20 crc kubenswrapper[4932]: I0321 09:18:20.328048 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6558dc855c-48pzz"] Mar 21 09:18:21 crc kubenswrapper[4932]: I0321 09:18:21.264469 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c8728f16-950d-456c-865c-87365e4bc418","Type":"ContainerStarted","Data":"ffbd3481af5271578694cf2ec321dc767bc3c0ab290d657fc14eff63c20d0c1b"} Mar 21 09:18:21 crc kubenswrapper[4932]: I0321 09:18:21.268055 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" event={"ID":"699d7b1e-6190-4000-8035-0a2c288a53f7","Type":"ContainerStarted","Data":"c3889e2a0364a53dc1ad530d17d44adac71cdcef9519581b9cd6b90974513c13"} Mar 21 09:18:21 crc kubenswrapper[4932]: I0321 09:18:21.268130 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:21 crc kubenswrapper[4932]: I0321 09:18:21.311725 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=13.311703523 podStartE2EDuration="13.311703523s" podCreationTimestamp="2026-03-21 09:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:21.289709663 +0000 UTC m=+1204.884907942" watchObservedRunningTime="2026-03-21 09:18:21.311703523 +0000 UTC m=+1204.906901862" Mar 21 09:18:21 crc kubenswrapper[4932]: I0321 09:18:21.718952 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b136cc-b72d-40d0-bf33-d7edb2b6a514" path="/var/lib/kubelet/pods/a4b136cc-b72d-40d0-bf33-d7edb2b6a514/volumes" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.125590 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.156023 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" podStartSLOduration=4.156001519 podStartE2EDuration="4.156001519s" podCreationTimestamp="2026-03-21 09:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:21.322301521 +0000 UTC m=+1204.917499790" watchObservedRunningTime="2026-03-21 09:18:22.156001519 +0000 UTC m=+1205.751199798" Mar 21 09:18:22 crc kubenswrapper[4932]: 
I0321 09:18:22.544041 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rc447"] Mar 21 09:18:22 crc kubenswrapper[4932]: E0321 09:18:22.544489 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b136cc-b72d-40d0-bf33-d7edb2b6a514" containerName="init" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.544505 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b136cc-b72d-40d0-bf33-d7edb2b6a514" containerName="init" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.544673 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b136cc-b72d-40d0-bf33-d7edb2b6a514" containerName="init" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.545351 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.581292 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rc447"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.654769 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvzd\" (UniqueName: \"kubernetes.io/projected/6cd399df-1028-4cf8-bf73-307464772e8a-kube-api-access-ddvzd\") pod \"barbican-db-create-rc447\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.654825 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd399df-1028-4cf8-bf73-307464772e8a-operator-scripts\") pod \"barbican-db-create-rc447\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.659208 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-b419-account-create-update-jjq88"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.660380 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.662541 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.680632 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b419-account-create-update-jjq88"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.737223 4932 scope.go:117] "RemoveContainer" containerID="115fbe92689e03821ad165ba1b6b73717ae7980cfc9e1510e94f0b67193313bc" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.756794 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd399df-1028-4cf8-bf73-307464772e8a-operator-scripts\") pod \"barbican-db-create-rc447\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.756979 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvzd\" (UniqueName: \"kubernetes.io/projected/6cd399df-1028-4cf8-bf73-307464772e8a-kube-api-access-ddvzd\") pod \"barbican-db-create-rc447\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.757880 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd399df-1028-4cf8-bf73-307464772e8a-operator-scripts\") pod \"barbican-db-create-rc447\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.773012 
4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qp2p8"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.774413 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.782779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvzd\" (UniqueName: \"kubernetes.io/projected/6cd399df-1028-4cf8-bf73-307464772e8a-kube-api-access-ddvzd\") pod \"barbican-db-create-rc447\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.806127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qp2p8"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.858224 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa143683-f786-4613-aed7-95a17c40f484-operator-scripts\") pod \"barbican-b419-account-create-update-jjq88\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.858264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgjrq\" (UniqueName: \"kubernetes.io/projected/aa143683-f786-4613-aed7-95a17c40f484-kube-api-access-sgjrq\") pod \"barbican-b419-account-create-update-jjq88\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.865215 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b425-account-create-update-n4dzt"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.866751 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.870812 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rc447" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.870867 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.874140 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b425-account-create-update-n4dzt"] Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.887340 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.960202 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9ch\" (UniqueName: \"kubernetes.io/projected/4359dfaa-1096-47af-a540-db559c28d15e-kube-api-access-gx9ch\") pod \"cinder-db-create-qp2p8\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.960327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa143683-f786-4613-aed7-95a17c40f484-operator-scripts\") pod \"barbican-b419-account-create-update-jjq88\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.960376 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgjrq\" (UniqueName: \"kubernetes.io/projected/aa143683-f786-4613-aed7-95a17c40f484-kube-api-access-sgjrq\") pod \"barbican-b419-account-create-update-jjq88\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " 
pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.960400 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4359dfaa-1096-47af-a540-db559c28d15e-operator-scripts\") pod \"cinder-db-create-qp2p8\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.961151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa143683-f786-4613-aed7-95a17c40f484-operator-scripts\") pod \"barbican-b419-account-create-update-jjq88\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:22 crc kubenswrapper[4932]: I0321 09:18:22.972520 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.007108 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qhcqk"] Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.008279 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.017175 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgjrq\" (UniqueName: \"kubernetes.io/projected/aa143683-f786-4613-aed7-95a17c40f484-kube-api-access-sgjrq\") pod \"barbican-b419-account-create-update-jjq88\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.021007 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fkdzk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.022321 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.022549 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.022668 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.062611 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc77p\" (UniqueName: \"kubernetes.io/projected/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-kube-api-access-tc77p\") pod \"cinder-b425-account-create-update-n4dzt\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.062673 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-operator-scripts\") pod \"cinder-b425-account-create-update-n4dzt\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " 
pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.062714 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4359dfaa-1096-47af-a540-db559c28d15e-operator-scripts\") pod \"cinder-db-create-qp2p8\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.062808 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9ch\" (UniqueName: \"kubernetes.io/projected/4359dfaa-1096-47af-a540-db559c28d15e-kube-api-access-gx9ch\") pod \"cinder-db-create-qp2p8\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.064122 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4359dfaa-1096-47af-a540-db559c28d15e-operator-scripts\") pod \"cinder-db-create-qp2p8\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.073553 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhcqk"] Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.125592 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9ch\" (UniqueName: \"kubernetes.io/projected/4359dfaa-1096-47af-a540-db559c28d15e-kube-api-access-gx9ch\") pod \"cinder-db-create-qp2p8\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.156772 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.166455 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpgb\" (UniqueName: \"kubernetes.io/projected/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-kube-api-access-5xpgb\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.166575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc77p\" (UniqueName: \"kubernetes.io/projected/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-kube-api-access-tc77p\") pod \"cinder-b425-account-create-update-n4dzt\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.166615 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-operator-scripts\") pod \"cinder-b425-account-create-update-n4dzt\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.166696 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-combined-ca-bundle\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.166726 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-config-data\") pod 
\"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.168126 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-operator-scripts\") pod \"cinder-b425-account-create-update-n4dzt\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.219179 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc77p\" (UniqueName: \"kubernetes.io/projected/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-kube-api-access-tc77p\") pod \"cinder-b425-account-create-update-n4dzt\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.268260 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-combined-ca-bundle\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.268334 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-config-data\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.268401 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpgb\" (UniqueName: \"kubernetes.io/projected/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-kube-api-access-5xpgb\") pod \"keystone-db-sync-qhcqk\" (UID: 
\"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.273926 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-combined-ca-bundle\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.282896 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.283618 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-config-data\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.331746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpgb\" (UniqueName: \"kubernetes.io/projected/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-kube-api-access-5xpgb\") pod \"keystone-db-sync-qhcqk\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.394937 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.500178 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.500223 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.500375 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.511995 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.730870 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rc447"] Mar 21 09:18:23 crc kubenswrapper[4932]: W0321 09:18:23.736849 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cd399df_1028_4cf8_bf73_307464772e8a.slice/crio-b4e0be9a24e69b3a917dad95c860c5fb8e332ab2c30fd93104d03a352e806a58 WatchSource:0}: Error finding container b4e0be9a24e69b3a917dad95c860c5fb8e332ab2c30fd93104d03a352e806a58: Status 404 returned error can't find the container with id b4e0be9a24e69b3a917dad95c860c5fb8e332ab2c30fd93104d03a352e806a58 Mar 21 09:18:23 crc kubenswrapper[4932]: I0321 09:18:23.871295 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qp2p8"] Mar 21 09:18:24 crc kubenswrapper[4932]: W0321 09:18:24.020348 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb62a93b2_b391_4a7f_b430_dd09d30cc6b0.slice/crio-34703777dea14ea3547626545d518f1959d2d3bb3015c518fe68cbc58aa35b0a WatchSource:0}: Error finding 
container 34703777dea14ea3547626545d518f1959d2d3bb3015c518fe68cbc58aa35b0a: Status 404 returned error can't find the container with id 34703777dea14ea3547626545d518f1959d2d3bb3015c518fe68cbc58aa35b0a Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.035321 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhcqk"] Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.063261 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b419-account-create-update-jjq88"] Mar 21 09:18:24 crc kubenswrapper[4932]: W0321 09:18:24.063755 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa143683_f786_4613_aed7_95a17c40f484.slice/crio-5b3a2ce4c2c31fedfac5eca461bac7760b26bfc36614e32109f950d023569641 WatchSource:0}: Error finding container 5b3a2ce4c2c31fedfac5eca461bac7760b26bfc36614e32109f950d023569641: Status 404 returned error can't find the container with id 5b3a2ce4c2c31fedfac5eca461bac7760b26bfc36614e32109f950d023569641 Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.092288 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b425-account-create-update-n4dzt"] Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.309283 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rc447" event={"ID":"6cd399df-1028-4cf8-bf73-307464772e8a","Type":"ContainerStarted","Data":"3abdbd41efd3138bd887ca71fb22a7f28afc3f1b53fd069611607089e0277fc0"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.309330 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rc447" event={"ID":"6cd399df-1028-4cf8-bf73-307464772e8a","Type":"ContainerStarted","Data":"b4e0be9a24e69b3a917dad95c860c5fb8e332ab2c30fd93104d03a352e806a58"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.312947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-b425-account-create-update-n4dzt" event={"ID":"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968","Type":"ContainerStarted","Data":"f40de84196136633e15322532b0a480556d28828673d9290289b0c5abf2989c4"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.312994 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b425-account-create-update-n4dzt" event={"ID":"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968","Type":"ContainerStarted","Data":"6310ecb057c5438057b53b60c659c0fd94af9054c3cf4d14daaee49c5253e681"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.315584 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhcqk" event={"ID":"b62a93b2-b391-4a7f-b430-dd09d30cc6b0","Type":"ContainerStarted","Data":"34703777dea14ea3547626545d518f1959d2d3bb3015c518fe68cbc58aa35b0a"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.317419 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b419-account-create-update-jjq88" event={"ID":"aa143683-f786-4613-aed7-95a17c40f484","Type":"ContainerStarted","Data":"078fd7e3900ba84c33dc3c709dd94d1060c6a80cd6ae21cd8f790945b13c549f"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.317470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b419-account-create-update-jjq88" event={"ID":"aa143683-f786-4613-aed7-95a17c40f484","Type":"ContainerStarted","Data":"5b3a2ce4c2c31fedfac5eca461bac7760b26bfc36614e32109f950d023569641"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.320697 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qp2p8" event={"ID":"4359dfaa-1096-47af-a540-db559c28d15e","Type":"ContainerStarted","Data":"d8dc86c56ad1ecb876a969a396e3fe533c8e69995907fffdadb52b455105f840"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.320740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qp2p8" 
event={"ID":"4359dfaa-1096-47af-a540-db559c28d15e","Type":"ContainerStarted","Data":"4542c3a5127a54d3cf95fdb4bb7c3b69e7e8420ef330882f70aa826d9f381cc1"} Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.329984 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.337152 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-rc447" podStartSLOduration=2.337128247 podStartE2EDuration="2.337128247s" podCreationTimestamp="2026-03-21 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:24.325522547 +0000 UTC m=+1207.920720816" watchObservedRunningTime="2026-03-21 09:18:24.337128247 +0000 UTC m=+1207.932326516" Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.341526 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b425-account-create-update-n4dzt" podStartSLOduration=2.3415069219999998 podStartE2EDuration="2.341506922s" podCreationTimestamp="2026-03-21 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:24.340142219 +0000 UTC m=+1207.935340498" watchObservedRunningTime="2026-03-21 09:18:24.341506922 +0000 UTC m=+1207.936705191" Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.361509 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-b419-account-create-update-jjq88" podStartSLOduration=2.36145865 podStartE2EDuration="2.36145865s" podCreationTimestamp="2026-03-21 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:24.357434555 +0000 UTC m=+1207.952632844" 
watchObservedRunningTime="2026-03-21 09:18:24.36145865 +0000 UTC m=+1207.956656909" Mar 21 09:18:24 crc kubenswrapper[4932]: I0321 09:18:24.375521 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qp2p8" podStartSLOduration=2.375502754 podStartE2EDuration="2.375502754s" podCreationTimestamp="2026-03-21 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:24.374772631 +0000 UTC m=+1207.969970900" watchObservedRunningTime="2026-03-21 09:18:24.375502754 +0000 UTC m=+1207.970701023" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.346006 4932 generic.go:334] "Generic (PLEG): container finished" podID="6cd399df-1028-4cf8-bf73-307464772e8a" containerID="3abdbd41efd3138bd887ca71fb22a7f28afc3f1b53fd069611607089e0277fc0" exitCode=0 Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.346301 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rc447" event={"ID":"6cd399df-1028-4cf8-bf73-307464772e8a","Type":"ContainerDied","Data":"3abdbd41efd3138bd887ca71fb22a7f28afc3f1b53fd069611607089e0277fc0"} Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.348281 4932 generic.go:334] "Generic (PLEG): container finished" podID="8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" containerID="f40de84196136633e15322532b0a480556d28828673d9290289b0c5abf2989c4" exitCode=0 Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.348368 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b425-account-create-update-n4dzt" event={"ID":"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968","Type":"ContainerDied","Data":"f40de84196136633e15322532b0a480556d28828673d9290289b0c5abf2989c4"} Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.350174 4932 generic.go:334] "Generic (PLEG): container finished" podID="aa143683-f786-4613-aed7-95a17c40f484" 
containerID="078fd7e3900ba84c33dc3c709dd94d1060c6a80cd6ae21cd8f790945b13c549f" exitCode=0 Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.350223 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b419-account-create-update-jjq88" event={"ID":"aa143683-f786-4613-aed7-95a17c40f484","Type":"ContainerDied","Data":"078fd7e3900ba84c33dc3c709dd94d1060c6a80cd6ae21cd8f790945b13c549f"} Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.352562 4932 generic.go:334] "Generic (PLEG): container finished" podID="4359dfaa-1096-47af-a540-db559c28d15e" containerID="d8dc86c56ad1ecb876a969a396e3fe533c8e69995907fffdadb52b455105f840" exitCode=0 Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.352838 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qp2p8" event={"ID":"4359dfaa-1096-47af-a540-db559c28d15e","Type":"ContainerDied","Data":"d8dc86c56ad1ecb876a969a396e3fe533c8e69995907fffdadb52b455105f840"} Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.524922 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rkf9k"] Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.526497 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.555065 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rkf9k"] Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.581597 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-k7vwh"] Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.583005 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.585889 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-fmcmm" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.586102 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.591597 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-k7vwh"] Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.627483 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619c392-3f57-4cf2-9f7b-880c8f672365-operator-scripts\") pod \"neutron-db-create-rkf9k\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.628666 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zcg6\" (UniqueName: \"kubernetes.io/projected/c619c392-3f57-4cf2-9f7b-880c8f672365-kube-api-access-8zcg6\") pod \"neutron-db-create-rkf9k\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.647556 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0698-account-create-update-7r5n2"] Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.649061 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.652291 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.664986 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0698-account-create-update-7r5n2"] Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.729905 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619c392-3f57-4cf2-9f7b-880c8f672365-operator-scripts\") pod \"neutron-db-create-rkf9k\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.729984 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-combined-ca-bundle\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.730027 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zcg6\" (UniqueName: \"kubernetes.io/projected/c619c392-3f57-4cf2-9f7b-880c8f672365-kube-api-access-8zcg6\") pod \"neutron-db-create-rkf9k\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.730058 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-config-data\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc 
kubenswrapper[4932]: I0321 09:18:25.730121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8hs\" (UniqueName: \"kubernetes.io/projected/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-kube-api-access-ww8hs\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.730145 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-db-sync-config-data\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.731004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619c392-3f57-4cf2-9f7b-880c8f672365-operator-scripts\") pod \"neutron-db-create-rkf9k\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.751293 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zcg6\" (UniqueName: \"kubernetes.io/projected/c619c392-3f57-4cf2-9f7b-880c8f672365-kube-api-access-8zcg6\") pod \"neutron-db-create-rkf9k\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.831468 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffp6p\" (UniqueName: \"kubernetes.io/projected/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-kube-api-access-ffp6p\") pod \"neutron-0698-account-create-update-7r5n2\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 
21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.831546 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-combined-ca-bundle\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.831607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-config-data\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.831678 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-operator-scripts\") pod \"neutron-0698-account-create-update-7r5n2\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.831701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8hs\" (UniqueName: \"kubernetes.io/projected/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-kube-api-access-ww8hs\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.831730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-db-sync-config-data\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 
09:18:25.841177 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-config-data\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.842998 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-combined-ca-bundle\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.847125 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-db-sync-config-data\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.859471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8hs\" (UniqueName: \"kubernetes.io/projected/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-kube-api-access-ww8hs\") pod \"watcher-db-sync-k7vwh\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.868809 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.904512 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.936044 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-operator-scripts\") pod \"neutron-0698-account-create-update-7r5n2\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.936145 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffp6p\" (UniqueName: \"kubernetes.io/projected/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-kube-api-access-ffp6p\") pod \"neutron-0698-account-create-update-7r5n2\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.937408 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-operator-scripts\") pod \"neutron-0698-account-create-update-7r5n2\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.957328 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffp6p\" (UniqueName: \"kubernetes.io/projected/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-kube-api-access-ffp6p\") pod \"neutron-0698-account-create-update-7r5n2\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:25 crc kubenswrapper[4932]: I0321 09:18:25.971978 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.743001 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.751437 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.796644 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rc447" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.802123 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889582 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddvzd\" (UniqueName: \"kubernetes.io/projected/6cd399df-1028-4cf8-bf73-307464772e8a-kube-api-access-ddvzd\") pod \"6cd399df-1028-4cf8-bf73-307464772e8a\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889633 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc77p\" (UniqueName: \"kubernetes.io/projected/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-kube-api-access-tc77p\") pod \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4359dfaa-1096-47af-a540-db559c28d15e-operator-scripts\") pod \"4359dfaa-1096-47af-a540-db559c28d15e\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 
09:18:28.889730 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-operator-scripts\") pod \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\" (UID: \"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889830 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa143683-f786-4613-aed7-95a17c40f484-operator-scripts\") pod \"aa143683-f786-4613-aed7-95a17c40f484\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889852 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx9ch\" (UniqueName: \"kubernetes.io/projected/4359dfaa-1096-47af-a540-db559c28d15e-kube-api-access-gx9ch\") pod \"4359dfaa-1096-47af-a540-db559c28d15e\" (UID: \"4359dfaa-1096-47af-a540-db559c28d15e\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889898 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgjrq\" (UniqueName: \"kubernetes.io/projected/aa143683-f786-4613-aed7-95a17c40f484-kube-api-access-sgjrq\") pod \"aa143683-f786-4613-aed7-95a17c40f484\" (UID: \"aa143683-f786-4613-aed7-95a17c40f484\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.889953 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd399df-1028-4cf8-bf73-307464772e8a-operator-scripts\") pod \"6cd399df-1028-4cf8-bf73-307464772e8a\" (UID: \"6cd399df-1028-4cf8-bf73-307464772e8a\") " Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.890685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd399df-1028-4cf8-bf73-307464772e8a-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "6cd399df-1028-4cf8-bf73-307464772e8a" (UID: "6cd399df-1028-4cf8-bf73-307464772e8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.890728 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" (UID: "8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.891086 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa143683-f786-4613-aed7-95a17c40f484-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa143683-f786-4613-aed7-95a17c40f484" (UID: "aa143683-f786-4613-aed7-95a17c40f484"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.891516 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4359dfaa-1096-47af-a540-db559c28d15e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4359dfaa-1096-47af-a540-db559c28d15e" (UID: "4359dfaa-1096-47af-a540-db559c28d15e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.895267 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd399df-1028-4cf8-bf73-307464772e8a-kube-api-access-ddvzd" (OuterVolumeSpecName: "kube-api-access-ddvzd") pod "6cd399df-1028-4cf8-bf73-307464772e8a" (UID: "6cd399df-1028-4cf8-bf73-307464772e8a"). InnerVolumeSpecName "kube-api-access-ddvzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.895363 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-kube-api-access-tc77p" (OuterVolumeSpecName: "kube-api-access-tc77p") pod "8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" (UID: "8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968"). InnerVolumeSpecName "kube-api-access-tc77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.897709 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4359dfaa-1096-47af-a540-db559c28d15e-kube-api-access-gx9ch" (OuterVolumeSpecName: "kube-api-access-gx9ch") pod "4359dfaa-1096-47af-a540-db559c28d15e" (UID: "4359dfaa-1096-47af-a540-db559c28d15e"). InnerVolumeSpecName "kube-api-access-gx9ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.898866 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa143683-f786-4613-aed7-95a17c40f484-kube-api-access-sgjrq" (OuterVolumeSpecName: "kube-api-access-sgjrq") pod "aa143683-f786-4613-aed7-95a17c40f484" (UID: "aa143683-f786-4613-aed7-95a17c40f484"). InnerVolumeSpecName "kube-api-access-sgjrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.900048 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.964084 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-pwgtb"] Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.964326 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerName="dnsmasq-dns" containerID="cri-o://4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62" gracePeriod=10 Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992101 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc77p\" (UniqueName: \"kubernetes.io/projected/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-kube-api-access-tc77p\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992138 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4359dfaa-1096-47af-a540-db559c28d15e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992151 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992161 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa143683-f786-4613-aed7-95a17c40f484-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992173 4932 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-gx9ch\" (UniqueName: \"kubernetes.io/projected/4359dfaa-1096-47af-a540-db559c28d15e-kube-api-access-gx9ch\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992184 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgjrq\" (UniqueName: \"kubernetes.io/projected/aa143683-f786-4613-aed7-95a17c40f484-kube-api-access-sgjrq\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992195 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cd399df-1028-4cf8-bf73-307464772e8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:28 crc kubenswrapper[4932]: I0321 09:18:28.992203 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddvzd\" (UniqueName: \"kubernetes.io/projected/6cd399df-1028-4cf8-bf73-307464772e8a-kube-api-access-ddvzd\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.029337 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0698-account-create-update-7r5n2"] Mar 21 09:18:29 crc kubenswrapper[4932]: W0321 09:18:29.057533 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb733e0fd_d745_40bf_be43_1b3fdfa9d1ae.slice/crio-fd1aaeca83cc4f7c90b4a9b7d35a7c27154dff3d1b713ac03fb8692cb6abfd97 WatchSource:0}: Error finding container fd1aaeca83cc4f7c90b4a9b7d35a7c27154dff3d1b713ac03fb8692cb6abfd97: Status 404 returned error can't find the container with id fd1aaeca83cc4f7c90b4a9b7d35a7c27154dff3d1b713ac03fb8692cb6abfd97 Mar 21 09:18:29 crc kubenswrapper[4932]: W0321 09:18:29.058943 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd481d0_7b76_4ce4_9b88_4f8d37125f7e.slice/crio-77103851da0f9eb60efe8fd0c411a47f56d1635fcea5fbfa885c2b32d30c565b WatchSource:0}: Error finding container 77103851da0f9eb60efe8fd0c411a47f56d1635fcea5fbfa885c2b32d30c565b: Status 404 returned error can't find the container with id 77103851da0f9eb60efe8fd0c411a47f56d1635fcea5fbfa885c2b32d30c565b Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.073809 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-k7vwh"] Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.152015 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rkf9k"] Mar 21 09:18:29 crc kubenswrapper[4932]: W0321 09:18:29.165571 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc619c392_3f57_4cf2_9f7b_880c8f672365.slice/crio-27f0269c4dd52c730889e1ba7217f16c70b5afa0f9ad17c79b10ef2627f96aba WatchSource:0}: Error finding container 27f0269c4dd52c730889e1ba7217f16c70b5afa0f9ad17c79b10ef2627f96aba: Status 404 returned error can't find the container with id 27f0269c4dd52c730889e1ba7217f16c70b5afa0f9ad17c79b10ef2627f96aba Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.407951 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.414887 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b425-account-create-update-n4dzt" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.414918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b425-account-create-update-n4dzt" event={"ID":"8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968","Type":"ContainerDied","Data":"6310ecb057c5438057b53b60c659c0fd94af9054c3cf4d14daaee49c5253e681"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.414967 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6310ecb057c5438057b53b60c659c0fd94af9054c3cf4d14daaee49c5253e681" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.419246 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0698-account-create-update-7r5n2" event={"ID":"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae","Type":"ContainerStarted","Data":"d264d5116d08eca72edf1910e4cc3259535a7f665f84c76ecc73413de419d3e9"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.419290 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0698-account-create-update-7r5n2" event={"ID":"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae","Type":"ContainerStarted","Data":"fd1aaeca83cc4f7c90b4a9b7d35a7c27154dff3d1b713ac03fb8692cb6abfd97"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.426574 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k7vwh" event={"ID":"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e","Type":"ContainerStarted","Data":"77103851da0f9eb60efe8fd0c411a47f56d1635fcea5fbfa885c2b32d30c565b"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.432239 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rkf9k" event={"ID":"c619c392-3f57-4cf2-9f7b-880c8f672365","Type":"ContainerStarted","Data":"5e60d67350769822ee75069a6f072b13e1b777adb01689bbe512cf8c0982a267"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.432303 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-create-rkf9k" event={"ID":"c619c392-3f57-4cf2-9f7b-880c8f672365","Type":"ContainerStarted","Data":"27f0269c4dd52c730889e1ba7217f16c70b5afa0f9ad17c79b10ef2627f96aba"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.463309 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rc447" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.464313 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rc447" event={"ID":"6cd399df-1028-4cf8-bf73-307464772e8a","Type":"ContainerDied","Data":"b4e0be9a24e69b3a917dad95c860c5fb8e332ab2c30fd93104d03a352e806a58"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.464402 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e0be9a24e69b3a917dad95c860c5fb8e332ab2c30fd93104d03a352e806a58" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.469135 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-rkf9k" podStartSLOduration=4.469110968 podStartE2EDuration="4.469110968s" podCreationTimestamp="2026-03-21 09:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:29.460034298 +0000 UTC m=+1213.055232567" watchObservedRunningTime="2026-03-21 09:18:29.469110968 +0000 UTC m=+1213.064309237" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.482766 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0698-account-create-update-7r5n2" podStartSLOduration=4.482745421 podStartE2EDuration="4.482745421s" podCreationTimestamp="2026-03-21 09:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:29.478517609 +0000 UTC m=+1213.073715878" 
watchObservedRunningTime="2026-03-21 09:18:29.482745421 +0000 UTC m=+1213.077943690" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.491290 4932 generic.go:334] "Generic (PLEG): container finished" podID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerID="4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62" exitCode=0 Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.491394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" event={"ID":"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0","Type":"ContainerDied","Data":"4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.491422 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" event={"ID":"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0","Type":"ContainerDied","Data":"63fed95f74552f54342f03b9646f8eeeb20f5a2d87b255a6d207170f3744c2e6"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.491441 4932 scope.go:117] "RemoveContainer" containerID="4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.491598 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d95ff5b97-pwgtb" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.500133 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-config\") pod \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.500362 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-sb\") pod \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.500386 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-nb\") pod \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.500405 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-dns-svc\") pod \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.500453 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsf72\" (UniqueName: \"kubernetes.io/projected/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-kube-api-access-lsf72\") pod \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\" (UID: \"ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0\") " Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.502874 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhcqk" 
event={"ID":"b62a93b2-b391-4a7f-b430-dd09d30cc6b0","Type":"ContainerStarted","Data":"e38075dd4e1927be6cf2db4967c264b8d09c9fb85b43dd8a231e6ae1bdbc15e0"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.509716 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-kube-api-access-lsf72" (OuterVolumeSpecName: "kube-api-access-lsf72") pod "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" (UID: "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0"). InnerVolumeSpecName "kube-api-access-lsf72". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.513157 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b419-account-create-update-jjq88" event={"ID":"aa143683-f786-4613-aed7-95a17c40f484","Type":"ContainerDied","Data":"5b3a2ce4c2c31fedfac5eca461bac7760b26bfc36614e32109f950d023569641"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.513203 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b419-account-create-update-jjq88" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.513209 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3a2ce4c2c31fedfac5eca461bac7760b26bfc36614e32109f950d023569641" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.527985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qp2p8" event={"ID":"4359dfaa-1096-47af-a540-db559c28d15e","Type":"ContainerDied","Data":"4542c3a5127a54d3cf95fdb4bb7c3b69e7e8420ef330882f70aa826d9f381cc1"} Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.528034 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4542c3a5127a54d3cf95fdb4bb7c3b69e7e8420ef330882f70aa826d9f381cc1" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.528099 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qp2p8" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.533808 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qhcqk" podStartSLOduration=2.9817202800000002 podStartE2EDuration="7.533786s" podCreationTimestamp="2026-03-21 09:18:22 +0000 UTC" firstStartedPulling="2026-03-21 09:18:24.023190989 +0000 UTC m=+1207.618389248" lastFinishedPulling="2026-03-21 09:18:28.575256699 +0000 UTC m=+1212.170454968" observedRunningTime="2026-03-21 09:18:29.525311588 +0000 UTC m=+1213.120509857" watchObservedRunningTime="2026-03-21 09:18:29.533786 +0000 UTC m=+1213.128984269" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.545181 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" (UID: "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.545250 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" (UID: "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.566618 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-config" (OuterVolumeSpecName: "config") pod "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" (UID: "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.581006 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" (UID: "ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.603238 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.603314 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.603324 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.603332 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.603342 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsf72\" (UniqueName: \"kubernetes.io/projected/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0-kube-api-access-lsf72\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.789642 4932 scope.go:117] "RemoveContainer" containerID="646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.826407 4932 scope.go:117] "RemoveContainer" containerID="4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.832234 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-pwgtb"] Mar 21 09:18:29 crc kubenswrapper[4932]: E0321 09:18:29.835889 4932 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62\": container with ID starting with 4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62 not found: ID does not exist" containerID="4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.835946 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62"} err="failed to get container status \"4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62\": rpc error: code = NotFound desc = could not find container \"4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62\": container with ID starting with 4bc8d8b4cd34bda6cea9511aca17beaa1e5530be4799c8782d86c4c1105dcc62 not found: ID does not exist" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.835975 4932 scope.go:117] "RemoveContainer" containerID="646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6" Mar 21 09:18:29 crc kubenswrapper[4932]: E0321 09:18:29.837457 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6\": container with ID starting with 646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6 not found: ID does not exist" containerID="646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.837531 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6"} err="failed to get container status \"646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6\": rpc error: code = NotFound desc = could not find 
container \"646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6\": container with ID starting with 646b61fc1b210cc0fec2c145f2688a7bb887150b9cf89abfac8eda6b8901eec6 not found: ID does not exist" Mar 21 09:18:29 crc kubenswrapper[4932]: I0321 09:18:29.853792 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d95ff5b97-pwgtb"] Mar 21 09:18:30 crc kubenswrapper[4932]: I0321 09:18:30.225255 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:18:30 crc kubenswrapper[4932]: I0321 09:18:30.225338 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:18:30 crc kubenswrapper[4932]: I0321 09:18:30.537970 4932 generic.go:334] "Generic (PLEG): container finished" podID="b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" containerID="d264d5116d08eca72edf1910e4cc3259535a7f665f84c76ecc73413de419d3e9" exitCode=0 Mar 21 09:18:30 crc kubenswrapper[4932]: I0321 09:18:30.538023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0698-account-create-update-7r5n2" event={"ID":"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae","Type":"ContainerDied","Data":"d264d5116d08eca72edf1910e4cc3259535a7f665f84c76ecc73413de419d3e9"} Mar 21 09:18:30 crc kubenswrapper[4932]: I0321 09:18:30.540860 4932 generic.go:334] "Generic (PLEG): container finished" podID="c619c392-3f57-4cf2-9f7b-880c8f672365" containerID="5e60d67350769822ee75069a6f072b13e1b777adb01689bbe512cf8c0982a267" exitCode=0 Mar 21 09:18:30 crc kubenswrapper[4932]: 
I0321 09:18:30.541957 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rkf9k" event={"ID":"c619c392-3f57-4cf2-9f7b-880c8f672365","Type":"ContainerDied","Data":"5e60d67350769822ee75069a6f072b13e1b777adb01689bbe512cf8c0982a267"} Mar 21 09:18:31 crc kubenswrapper[4932]: I0321 09:18:31.711869 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" path="/var/lib/kubelet/pods/ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0/volumes" Mar 21 09:18:33 crc kubenswrapper[4932]: I0321 09:18:33.568883 4932 generic.go:334] "Generic (PLEG): container finished" podID="b62a93b2-b391-4a7f-b430-dd09d30cc6b0" containerID="e38075dd4e1927be6cf2db4967c264b8d09c9fb85b43dd8a231e6ae1bdbc15e0" exitCode=0 Mar 21 09:18:33 crc kubenswrapper[4932]: I0321 09:18:33.568919 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhcqk" event={"ID":"b62a93b2-b391-4a7f-b430-dd09d30cc6b0","Type":"ContainerDied","Data":"e38075dd4e1927be6cf2db4967c264b8d09c9fb85b43dd8a231e6ae1bdbc15e0"} Mar 21 09:18:36 crc kubenswrapper[4932]: I0321 09:18:36.981928 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.005938 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.007206 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157256 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-operator-scripts\") pod \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157300 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619c392-3f57-4cf2-9f7b-880c8f672365-operator-scripts\") pod \"c619c392-3f57-4cf2-9f7b-880c8f672365\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157325 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpgb\" (UniqueName: \"kubernetes.io/projected/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-kube-api-access-5xpgb\") pod \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157398 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zcg6\" (UniqueName: \"kubernetes.io/projected/c619c392-3f57-4cf2-9f7b-880c8f672365-kube-api-access-8zcg6\") pod \"c619c392-3f57-4cf2-9f7b-880c8f672365\" (UID: \"c619c392-3f57-4cf2-9f7b-880c8f672365\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157435 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-config-data\") pod \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157552 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ffp6p\" (UniqueName: \"kubernetes.io/projected/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-kube-api-access-ffp6p\") pod \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\" (UID: \"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.157906 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" (UID: "b733e0fd-d745-40bf-be43-1b3fdfa9d1ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.158080 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-combined-ca-bundle\") pod \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\" (UID: \"b62a93b2-b391-4a7f-b430-dd09d30cc6b0\") " Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.158135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c619c392-3f57-4cf2-9f7b-880c8f672365-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c619c392-3f57-4cf2-9f7b-880c8f672365" (UID: "c619c392-3f57-4cf2-9f7b-880c8f672365"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.158677 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.158717 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619c392-3f57-4cf2-9f7b-880c8f672365-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.166426 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-kube-api-access-5xpgb" (OuterVolumeSpecName: "kube-api-access-5xpgb") pod "b62a93b2-b391-4a7f-b430-dd09d30cc6b0" (UID: "b62a93b2-b391-4a7f-b430-dd09d30cc6b0"). InnerVolumeSpecName "kube-api-access-5xpgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.167024 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-kube-api-access-ffp6p" (OuterVolumeSpecName: "kube-api-access-ffp6p") pod "b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" (UID: "b733e0fd-d745-40bf-be43-1b3fdfa9d1ae"). InnerVolumeSpecName "kube-api-access-ffp6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.167385 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c619c392-3f57-4cf2-9f7b-880c8f672365-kube-api-access-8zcg6" (OuterVolumeSpecName: "kube-api-access-8zcg6") pod "c619c392-3f57-4cf2-9f7b-880c8f672365" (UID: "c619c392-3f57-4cf2-9f7b-880c8f672365"). InnerVolumeSpecName "kube-api-access-8zcg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.197038 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b62a93b2-b391-4a7f-b430-dd09d30cc6b0" (UID: "b62a93b2-b391-4a7f-b430-dd09d30cc6b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.213338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-config-data" (OuterVolumeSpecName: "config-data") pod "b62a93b2-b391-4a7f-b430-dd09d30cc6b0" (UID: "b62a93b2-b391-4a7f-b430-dd09d30cc6b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.260491 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.260996 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpgb\" (UniqueName: \"kubernetes.io/projected/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-kube-api-access-5xpgb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.261007 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zcg6\" (UniqueName: \"kubernetes.io/projected/c619c392-3f57-4cf2-9f7b-880c8f672365-kube-api-access-8zcg6\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.261016 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b62a93b2-b391-4a7f-b430-dd09d30cc6b0-config-data\") on 
node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.261026 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffp6p\" (UniqueName: \"kubernetes.io/projected/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae-kube-api-access-ffp6p\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.606300 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0698-account-create-update-7r5n2" event={"ID":"b733e0fd-d745-40bf-be43-1b3fdfa9d1ae","Type":"ContainerDied","Data":"fd1aaeca83cc4f7c90b4a9b7d35a7c27154dff3d1b713ac03fb8692cb6abfd97"} Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.606365 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1aaeca83cc4f7c90b4a9b7d35a7c27154dff3d1b713ac03fb8692cb6abfd97" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.606324 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0698-account-create-update-7r5n2" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.608042 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k7vwh" event={"ID":"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e","Type":"ContainerStarted","Data":"3c4d336a5bbe2d06203cd10f04946393130b4c9e1ba60f418860e5921a36735b"} Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.609508 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rkf9k" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.609564 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rkf9k" event={"ID":"c619c392-3f57-4cf2-9f7b-880c8f672365","Type":"ContainerDied","Data":"27f0269c4dd52c730889e1ba7217f16c70b5afa0f9ad17c79b10ef2627f96aba"} Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.609602 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f0269c4dd52c730889e1ba7217f16c70b5afa0f9ad17c79b10ef2627f96aba" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.611212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhcqk" event={"ID":"b62a93b2-b391-4a7f-b430-dd09d30cc6b0","Type":"ContainerDied","Data":"34703777dea14ea3547626545d518f1959d2d3bb3015c518fe68cbc58aa35b0a"} Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.611237 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34703777dea14ea3547626545d518f1959d2d3bb3015c518fe68cbc58aa35b0a" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.611328 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhcqk" Mar 21 09:18:37 crc kubenswrapper[4932]: I0321 09:18:37.642398 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-k7vwh" podStartSLOduration=4.720275154 podStartE2EDuration="12.642368774s" podCreationTimestamp="2026-03-21 09:18:25 +0000 UTC" firstStartedPulling="2026-03-21 09:18:29.072711138 +0000 UTC m=+1212.667909397" lastFinishedPulling="2026-03-21 09:18:36.994804748 +0000 UTC m=+1220.590003017" observedRunningTime="2026-03-21 09:18:37.626465572 +0000 UTC m=+1221.221663921" watchObservedRunningTime="2026-03-21 09:18:37.642368774 +0000 UTC m=+1221.237567063" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270431 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bdfdfdf5-x4nwh"] Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270850 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa143683-f786-4613-aed7-95a17c40f484" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270861 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa143683-f786-4613-aed7-95a17c40f484" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270870 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c619c392-3f57-4cf2-9f7b-880c8f672365" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270876 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c619c392-3f57-4cf2-9f7b-880c8f672365" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270892 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerName="init" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270899 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerName="init" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270909 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerName="dnsmasq-dns" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270914 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerName="dnsmasq-dns" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270924 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd399df-1028-4cf8-bf73-307464772e8a" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270930 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd399df-1028-4cf8-bf73-307464772e8a" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270940 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359dfaa-1096-47af-a540-db559c28d15e" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270946 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359dfaa-1096-47af-a540-db559c28d15e" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270956 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270962 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270977 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270983 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: E0321 09:18:38.270991 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62a93b2-b391-4a7f-b430-dd09d30cc6b0" containerName="keystone-db-sync" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.270998 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62a93b2-b391-4a7f-b430-dd09d30cc6b0" containerName="keystone-db-sync" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271151 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa143683-f786-4613-aed7-95a17c40f484" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271185 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2d9a5c-8ffe-4f2d-a407-2f211230c4e0" containerName="dnsmasq-dns" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271199 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c619c392-3f57-4cf2-9f7b-880c8f672365" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271210 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271221 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd399df-1028-4cf8-bf73-307464772e8a" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271232 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4359dfaa-1096-47af-a540-db559c28d15e" containerName="mariadb-database-create" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271242 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62a93b2-b391-4a7f-b430-dd09d30cc6b0" 
containerName="keystone-db-sync" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.271255 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" containerName="mariadb-account-create-update" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.272337 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.299383 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bdfdfdf5-x4nwh"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.322114 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tp2fk"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.323334 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.326911 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.327207 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.327956 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.328120 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fkdzk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.328466 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.383552 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tp2fk"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.387257 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plks\" (UniqueName: \"kubernetes.io/projected/cc4b2568-70c2-423f-9c64-170c3d6dd610-kube-api-access-5plks\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.387412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-svc\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.390663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-nb\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.391098 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-sb\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.391147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-config\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: 
I0321 09:18:38.391184 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-swift-storage-0\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.463365 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dc94d857-qzbl4"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.465308 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.471332 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.471561 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.471673 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lfhwb" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.471794 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.493146 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dc94d857-qzbl4"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-config-data\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494510 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-sb\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494549 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-config\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494581 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-swift-storage-0\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494617 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5plks\" (UniqueName: \"kubernetes.io/projected/cc4b2568-70c2-423f-9c64-170c3d6dd610-kube-api-access-5plks\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494656 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc7nj\" (UniqueName: \"kubernetes.io/projected/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-kube-api-access-tc7nj\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494690 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-svc\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494769 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-combined-ca-bundle\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494815 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-nb\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.494866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-scripts\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.496124 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-svc\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.510388 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-sb\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.510429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-nb\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.510655 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-fernet-keys\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.511010 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-credential-keys\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.511039 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-config\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.518403 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-swift-storage-0\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.536227 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vz4hd"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.542940 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.561458 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5plks\" (UniqueName: \"kubernetes.io/projected/cc4b2568-70c2-423f-9c64-170c3d6dd610-kube-api-access-5plks\") pod \"dnsmasq-dns-84bdfdfdf5-x4nwh\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.562335 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.569718 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.576690 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x4frd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.577458 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vz4hd"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.608811 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615398 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-horizon-secret-key\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-combined-ca-bundle\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615585 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-scripts\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-scripts\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615702 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-config-data\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" 
Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-fernet-keys\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-credential-keys\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-config-data\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615867 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-logs\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615916 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2j6r\" (UniqueName: \"kubernetes.io/projected/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-kube-api-access-b2j6r\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.615970 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc7nj\" (UniqueName: \"kubernetes.io/projected/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-kube-api-access-tc7nj\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.628079 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-combined-ca-bundle\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.631270 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-fernet-keys\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.631448 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-config-data\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.643101 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-credential-keys\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.654153 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-scripts\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.689465 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc7nj\" (UniqueName: \"kubernetes.io/projected/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-kube-api-access-tc7nj\") pod \"keystone-bootstrap-tp2fk\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.694612 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.708987 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kcpjh"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-logs\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723425 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2j6r\" (UniqueName: \"kubernetes.io/projected/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-kube-api-access-b2j6r\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-combined-ca-bundle\") pod \"cinder-db-sync-vz4hd\" (UID: 
\"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723504 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-horizon-secret-key\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723531 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-config-data\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37076824-e8b6-4b75-aea4-f463d7e50613-etc-machine-id\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723611 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-db-sync-config-data\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723638 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-scripts\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " 
pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-scripts\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-config-data\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.723760 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2c8\" (UniqueName: \"kubernetes.io/projected/37076824-e8b6-4b75-aea4-f463d7e50613-kube-api-access-zg2c8\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.724698 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-scripts\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.724833 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-logs\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.726686 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-config-data\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.727846 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.733805 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.741763 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-horizon-secret-key\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.744259 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.749339 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8k82n" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.753054 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.757174 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.757690 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.770431 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kcpjh"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.803099 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bfc8fff89-rv95c"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.804568 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2j6r\" (UniqueName: \"kubernetes.io/projected/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-kube-api-access-b2j6r\") pod \"horizon-7dc94d857-qzbl4\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.805281 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.827737 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-combined-ca-bundle\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-config-data\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828410 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37076824-e8b6-4b75-aea4-f463d7e50613-etc-machine-id\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828442 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-db-sync-config-data\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828466 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hpm\" (UniqueName: \"kubernetes.io/projected/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-kube-api-access-z5hpm\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc 
kubenswrapper[4932]: I0321 09:18:38.828486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-scripts\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828540 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-combined-ca-bundle\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828568 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2c8\" (UniqueName: \"kubernetes.io/projected/37076824-e8b6-4b75-aea4-f463d7e50613-kube-api-access-zg2c8\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.828611 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-db-sync-config-data\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.829241 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37076824-e8b6-4b75-aea4-f463d7e50613-etc-machine-id\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.848146 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-combined-ca-bundle\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.848216 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bfc8fff89-rv95c"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.853321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-config-data\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.853407 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.853731 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-db-sync-config-data\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.874646 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.876116 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.880780 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.886133 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pn4p2" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.886342 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.886840 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.899954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-scripts\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.899956 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2c8\" (UniqueName: \"kubernetes.io/projected/37076824-e8b6-4b75-aea4-f463d7e50613-kube-api-access-zg2c8\") pod \"cinder-db-sync-vz4hd\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.912636 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.929586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5hpm\" (UniqueName: \"kubernetes.io/projected/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-kube-api-access-z5hpm\") pod \"barbican-db-sync-kcpjh\" (UID: 
\"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.930132 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-log-httpd\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.930280 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-config-data\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.930415 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.930539 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-combined-ca-bundle\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.930664 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: 
I0321 09:18:38.931292 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-db-sync-config-data\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.931693 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-scripts\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.931785 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-run-httpd\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.931858 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-scripts\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.931962 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlcn\" (UniqueName: \"kubernetes.io/projected/02ab16ee-8108-498f-8450-bb82bf6ce347-kube-api-access-4xlcn\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.932038 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-logs\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.932127 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-config-data\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.932266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-horizon-secret-key\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.931636 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bdfdfdf5-x4nwh"] Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.943137 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-db-sync-config-data\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.943601 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-combined-ca-bundle\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:38 crc kubenswrapper[4932]: I0321 09:18:38.946684 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfp4\" (UniqueName: \"kubernetes.io/projected/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-kube-api-access-mzfp4\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.003261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5hpm\" (UniqueName: \"kubernetes.io/projected/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-kube-api-access-z5hpm\") pod \"barbican-db-sync-kcpjh\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.040959 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-h2dqh"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.043794 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.053298 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sxwdd" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.055525 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.055581 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.104882 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-horizon-secret-key\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: 
I0321 09:18:39.106908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfp4\" (UniqueName: \"kubernetes.io/projected/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-kube-api-access-mzfp4\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.106973 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-logs\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107021 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107067 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-log-httpd\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-config-data\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107186 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107215 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgm2r\" (UniqueName: \"kubernetes.io/projected/046a48be-a77b-443a-b40d-c94a3c6bb34b-kube-api-access-wgm2r\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107329 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-config-data\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-scripts\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-run-httpd\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107482 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-scripts\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107500 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xlcn\" (UniqueName: \"kubernetes.io/projected/02ab16ee-8108-498f-8450-bb82bf6ce347-kube-api-access-4xlcn\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-logs\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-config-data\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " 
pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.107734 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-scripts\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.108670 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-scripts\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.058998 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.108996 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-log-httpd\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.112219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-run-httpd\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.113774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-logs\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.115179 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-config-data\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.115600 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-horizon-secret-key\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.116163 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.118475 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-scripts\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.118831 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.130661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-config-data\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.131120 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.135834 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xlcn\" (UniqueName: \"kubernetes.io/projected/02ab16ee-8108-498f-8450-bb82bf6ce347-kube-api-access-4xlcn\") pod \"ceilometer-0\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.142948 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-h2dqh"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.144621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfp4\" (UniqueName: \"kubernetes.io/projected/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-kube-api-access-mzfp4\") pod \"horizon-bfc8fff89-rv95c\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.149321 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.203553 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cbfddf68f-8mwxb"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.205933 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212497 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-combined-ca-bundle\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212599 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212622 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212646 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-scripts\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 
09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212698 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-logs\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212718 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-logs\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212739 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzk6\" (UniqueName: \"kubernetes.io/projected/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-kube-api-access-npzk6\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212770 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212834 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgm2r\" (UniqueName: \"kubernetes.io/projected/046a48be-a77b-443a-b40d-c94a3c6bb34b-kube-api-access-wgm2r\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 
09:18:39.212886 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-config-data\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212928 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-scripts\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.212981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-config-data\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.216260 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-logs\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.216584 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.233271 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.234673 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.241418 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-scripts\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.263076 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbfddf68f-8mwxb"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.263489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgm2r\" (UniqueName: \"kubernetes.io/projected/046a48be-a77b-443a-b40d-c94a3c6bb34b-kube-api-access-wgm2r\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.266040 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-config-data\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.266669 
4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.275304 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315673 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-config-data\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315747 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-combined-ca-bundle\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315807 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-config\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-logs\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315932 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzk6\" (UniqueName: \"kubernetes.io/projected/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-kube-api-access-npzk6\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315956 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.315981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.316084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-swift-storage-0\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.317182 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-logs\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.317257 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8xf\" (UniqueName: \"kubernetes.io/projected/829e124a-b451-4e8c-9d46-594c51a71418-kube-api-access-ps8xf\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.317301 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-scripts\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.317331 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-svc\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.322026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-combined-ca-bundle\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.322866 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-config-data\") pod 
\"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.323615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-scripts\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.344375 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzk6\" (UniqueName: \"kubernetes.io/projected/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-kube-api-access-npzk6\") pod \"placement-db-sync-h2dqh\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") " pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.413637 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.420064 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.420103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.420174 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-swift-storage-0\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.420199 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps8xf\" (UniqueName: \"kubernetes.io/projected/829e124a-b451-4e8c-9d46-594c51a71418-kube-api-access-ps8xf\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.420219 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-svc\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.420272 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-config\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.421132 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-config\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.421640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.422193 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.422830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-swift-storage-0\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.423788 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-svc\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.444415 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8xf\" (UniqueName: \"kubernetes.io/projected/829e124a-b451-4e8c-9d46-594c51a71418-kube-api-access-ps8xf\") pod \"dnsmasq-dns-5cbfddf68f-8mwxb\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") " pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.451404 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.464777 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-h2dqh" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.519567 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tp2fk"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.530779 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bdfdfdf5-x4nwh"] Mar 21 09:18:39 crc kubenswrapper[4932]: W0321 09:18:39.536176 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc4b2568_70c2_423f_9c64_170c3d6dd610.slice/crio-ab7c22a9166c411b5385dee598602d9eed4f1c83e332bbdddb2c7d8d6b65255b WatchSource:0}: Error finding container ab7c22a9166c411b5385dee598602d9eed4f1c83e332bbdddb2c7d8d6b65255b: Status 404 returned error can't find the container with id ab7c22a9166c411b5385dee598602d9eed4f1c83e332bbdddb2c7d8d6b65255b Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.581841 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.594655 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.596956 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.599485 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.599867 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.651421 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.732889 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.732945 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.732998 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.733078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.733120 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.733136 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.733180 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbk9c\" (UniqueName: \"kubernetes.io/projected/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-kube-api-access-hbk9c\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.733216 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.740060 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" event={"ID":"cc4b2568-70c2-423f-9c64-170c3d6dd610","Type":"ContainerStarted","Data":"ab7c22a9166c411b5385dee598602d9eed4f1c83e332bbdddb2c7d8d6b65255b"} Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.740188 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dc94d857-qzbl4"] Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.740260 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp2fk" event={"ID":"4bee703b-c944-4a5b-9ed7-2e1aaa74292c","Type":"ContainerStarted","Data":"01ca7ae4ff7a0b78b1dfb29455316d6deda062e7b2cf390c6073c5595c466893"} Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834450 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834530 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834610 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834759 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834778 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834806 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbk9c\" (UniqueName: \"kubernetes.io/projected/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-kube-api-access-hbk9c\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.834825 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-logs\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.835291 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.835548 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.838485 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.878775 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.879137 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.885961 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.886752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.887741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbk9c\" (UniqueName: \"kubernetes.io/projected/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-kube-api-access-hbk9c\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:39 crc kubenswrapper[4932]: I0321 09:18:39.968340 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.116283 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vz4hd"] Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.183889 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.238766 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.353375 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kcpjh"] Mar 21 09:18:40 crc kubenswrapper[4932]: W0321 09:18:40.370977 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod512ddfd8_6f62_4c46_b3c1_7b0d478e7a5a.slice/crio-d0172cca35ebf6643837d90d464b27054ed06f389fb143cff0b54d0ce13b9afc WatchSource:0}: Error finding container d0172cca35ebf6643837d90d464b27054ed06f389fb143cff0b54d0ce13b9afc: Status 404 returned error can't find the container with id d0172cca35ebf6643837d90d464b27054ed06f389fb143cff0b54d0ce13b9afc Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.744678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc94d857-qzbl4" event={"ID":"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da","Type":"ContainerStarted","Data":"1998f91697b50d767c185915dc37fda47f6e38bdced680305fa42955a425b93d"} Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.749822 4932 generic.go:334] "Generic (PLEG): container finished" podID="cc4b2568-70c2-423f-9c64-170c3d6dd610" containerID="dd98f91e77cd98cd178788649cb975f6ba744bf8b3318bda992c689212f9f819" exitCode=0 Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.749917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" event={"ID":"cc4b2568-70c2-423f-9c64-170c3d6dd610","Type":"ContainerDied","Data":"dd98f91e77cd98cd178788649cb975f6ba744bf8b3318bda992c689212f9f819"} Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.752592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerStarted","Data":"2bbf327d00f6879add54437bc7908f92f7ba7ed7d2588e47714f525b0c3159f5"} Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 
09:18:40.754341 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz4hd" event={"ID":"37076824-e8b6-4b75-aea4-f463d7e50613","Type":"ContainerStarted","Data":"b95f64f58609eaf5cdf5a8ce086c6090a0858becddc53aa8d503d25959dc0e42"} Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.761972 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp2fk" event={"ID":"4bee703b-c944-4a5b-9ed7-2e1aaa74292c","Type":"ContainerStarted","Data":"bc8fe310a3f81989e870f3cca4f6e5523a75b5b8fa09b2ea7ffad2deafb5f531"} Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.769262 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kcpjh" event={"ID":"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a","Type":"ContainerStarted","Data":"d0172cca35ebf6643837d90d464b27054ed06f389fb143cff0b54d0ce13b9afc"} Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.805190 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-h2dqh"] Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.834307 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bfc8fff89-rv95c"] Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.853691 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tp2fk" podStartSLOduration=2.853669751 podStartE2EDuration="2.853669751s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:40.829242345 +0000 UTC m=+1224.424440614" watchObservedRunningTime="2026-03-21 09:18:40.853669751 +0000 UTC m=+1224.448868020" Mar 21 09:18:40 crc kubenswrapper[4932]: W0321 09:18:40.878274 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4751c71_36b6_4a0c_a5b6_3bdbce6ce42b.slice/crio-ca4d2e591fc8561d6f6ea646c5fa65cb873a9122e2725f8a05612ee8daf95b8b WatchSource:0}: Error finding container ca4d2e591fc8561d6f6ea646c5fa65cb873a9122e2725f8a05612ee8daf95b8b: Status 404 returned error can't find the container with id ca4d2e591fc8561d6f6ea646c5fa65cb873a9122e2725f8a05612ee8daf95b8b Mar 21 09:18:40 crc kubenswrapper[4932]: W0321 09:18:40.935012 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd63fc3d6_b6f3_44a8_b251_9dda2e82ed3a.slice/crio-c58f397ff3d37c1c7d09eb59ff18d266a00bd82682101922174dfd8c049e0bd6 WatchSource:0}: Error finding container c58f397ff3d37c1c7d09eb59ff18d266a00bd82682101922174dfd8c049e0bd6: Status 404 returned error can't find the container with id c58f397ff3d37c1c7d09eb59ff18d266a00bd82682101922174dfd8c049e0bd6 Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.964051 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.973616 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbfddf68f-8mwxb"] Mar 21 09:18:40 crc kubenswrapper[4932]: I0321 09:18:40.991170 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.002936 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dc94d857-qzbl4"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.085406 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c74d85757-kjlwf"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.087104 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.096830 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.125991 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c74d85757-kjlwf"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.151104 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.163724 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nbzvp"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.165004 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.179061 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nbzvp"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.179974 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8adb280-44b6-4fb9-b358-aa75af003a44-horizon-secret-key\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.180042 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8adb280-44b6-4fb9-b358-aa75af003a44-logs\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.180078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zbz7s\" (UniqueName: \"kubernetes.io/projected/e8adb280-44b6-4fb9-b358-aa75af003a44-kube-api-access-zbz7s\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.180108 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-scripts\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.180213 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-config-data\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.180734 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.180937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.181069 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-857tb" Mar 21 09:18:41 crc kubenswrapper[4932]: W0321 09:18:41.245856 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7768d15_7d06_4d42_a33d_0f4b9113fe6b.slice/crio-58435cd7f5c88f326b0468d95856fbb128eea7e486ae1fe2da9d7045cd833bb9 WatchSource:0}: Error finding container 58435cd7f5c88f326b0468d95856fbb128eea7e486ae1fe2da9d7045cd833bb9: Status 404 returned error can't find the container 
with id 58435cd7f5c88f326b0468d95856fbb128eea7e486ae1fe2da9d7045cd833bb9 Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.248031 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6596\" (UniqueName: \"kubernetes.io/projected/faee8175-5928-4824-be91-e0da3c01b71a-kube-api-access-p6596\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285141 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8adb280-44b6-4fb9-b358-aa75af003a44-horizon-secret-key\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285172 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-config\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285221 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-combined-ca-bundle\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e8adb280-44b6-4fb9-b358-aa75af003a44-logs\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbz7s\" (UniqueName: \"kubernetes.io/projected/e8adb280-44b6-4fb9-b358-aa75af003a44-kube-api-access-zbz7s\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285343 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-scripts\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.285391 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-config-data\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.286644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-config-data\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.288491 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8adb280-44b6-4fb9-b358-aa75af003a44-logs\") pod \"horizon-5c74d85757-kjlwf\" (UID: 
\"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.289204 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-scripts\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.292895 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8adb280-44b6-4fb9-b358-aa75af003a44-horizon-secret-key\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.316724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbz7s\" (UniqueName: \"kubernetes.io/projected/e8adb280-44b6-4fb9-b358-aa75af003a44-kube-api-access-zbz7s\") pod \"horizon-5c74d85757-kjlwf\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.386961 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6596\" (UniqueName: \"kubernetes.io/projected/faee8175-5928-4824-be91-e0da3c01b71a-kube-api-access-p6596\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.387037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-config\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: 
I0321 09:18:41.387075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-combined-ca-bundle\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.401273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-combined-ca-bundle\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.404885 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-config\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.404961 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6596\" (UniqueName: \"kubernetes.io/projected/faee8175-5928-4824-be91-e0da3c01b71a-kube-api-access-p6596\") pod \"neutron-db-sync-nbzvp\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.429681 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.490019 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-sb\") pod \"cc4b2568-70c2-423f-9c64-170c3d6dd610\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.490419 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-nb\") pod \"cc4b2568-70c2-423f-9c64-170c3d6dd610\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.490624 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-swift-storage-0\") pod \"cc4b2568-70c2-423f-9c64-170c3d6dd610\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.490746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-svc\") pod \"cc4b2568-70c2-423f-9c64-170c3d6dd610\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.490820 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5plks\" (UniqueName: \"kubernetes.io/projected/cc4b2568-70c2-423f-9c64-170c3d6dd610-kube-api-access-5plks\") pod \"cc4b2568-70c2-423f-9c64-170c3d6dd610\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.490969 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-config\") pod \"cc4b2568-70c2-423f-9c64-170c3d6dd610\" (UID: \"cc4b2568-70c2-423f-9c64-170c3d6dd610\") " Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.494847 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4b2568-70c2-423f-9c64-170c3d6dd610-kube-api-access-5plks" (OuterVolumeSpecName: "kube-api-access-5plks") pod "cc4b2568-70c2-423f-9c64-170c3d6dd610" (UID: "cc4b2568-70c2-423f-9c64-170c3d6dd610"). InnerVolumeSpecName "kube-api-access-5plks". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.505499 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.511168 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc4b2568-70c2-423f-9c64-170c3d6dd610" (UID: "cc4b2568-70c2-423f-9c64-170c3d6dd610"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.511239 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc4b2568-70c2-423f-9c64-170c3d6dd610" (UID: "cc4b2568-70c2-423f-9c64-170c3d6dd610"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.518964 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc4b2568-70c2-423f-9c64-170c3d6dd610" (UID: "cc4b2568-70c2-423f-9c64-170c3d6dd610"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.520445 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc4b2568-70c2-423f-9c64-170c3d6dd610" (UID: "cc4b2568-70c2-423f-9c64-170c3d6dd610"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.524760 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.552008 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-config" (OuterVolumeSpecName: "config") pod "cc4b2568-70c2-423f-9c64-170c3d6dd610" (UID: "cc4b2568-70c2-423f-9c64-170c3d6dd610"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.592774 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.592842 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.592885 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.592900 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.592909 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5plks\" (UniqueName: \"kubernetes.io/projected/cc4b2568-70c2-423f-9c64-170c3d6dd610-kube-api-access-5plks\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.592917 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc4b2568-70c2-423f-9c64-170c3d6dd610-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.823680 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"046a48be-a77b-443a-b40d-c94a3c6bb34b","Type":"ContainerStarted","Data":"ef5b133c97b5d464c81630055d3be54666ce5275fe296e12a9855b0506296a45"} Mar 21 09:18:41 crc 
kubenswrapper[4932]: I0321 09:18:41.825494 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bfc8fff89-rv95c" event={"ID":"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a","Type":"ContainerStarted","Data":"c58f397ff3d37c1c7d09eb59ff18d266a00bd82682101922174dfd8c049e0bd6"} Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.827154 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" event={"ID":"cc4b2568-70c2-423f-9c64-170c3d6dd610","Type":"ContainerDied","Data":"ab7c22a9166c411b5385dee598602d9eed4f1c83e332bbdddb2c7d8d6b65255b"} Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.827182 4932 scope.go:117] "RemoveContainer" containerID="dd98f91e77cd98cd178788649cb975f6ba744bf8b3318bda992c689212f9f819" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.827288 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bdfdfdf5-x4nwh" Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.830159 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2dqh" event={"ID":"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b","Type":"ContainerStarted","Data":"ca4d2e591fc8561d6f6ea646c5fa65cb873a9122e2725f8a05612ee8daf95b8b"} Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.832555 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7768d15-7d06-4d42-a33d-0f4b9113fe6b","Type":"ContainerStarted","Data":"58435cd7f5c88f326b0468d95856fbb128eea7e486ae1fe2da9d7045cd833bb9"} Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.859570 4932 generic.go:334] "Generic (PLEG): container finished" podID="829e124a-b451-4e8c-9d46-594c51a71418" containerID="b712e45353e6a068788c30c74d1af34c5718a8200cfa2c1d7c74e904c142c92d" exitCode=0 Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.860454 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" event={"ID":"829e124a-b451-4e8c-9d46-594c51a71418","Type":"ContainerDied","Data":"b712e45353e6a068788c30c74d1af34c5718a8200cfa2c1d7c74e904c142c92d"} Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.860486 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" event={"ID":"829e124a-b451-4e8c-9d46-594c51a71418","Type":"ContainerStarted","Data":"7fb811bc183161bdcd2298e58dab138b3c3d03aedbc810eac42bb304cc029633"} Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.913182 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bdfdfdf5-x4nwh"] Mar 21 09:18:41 crc kubenswrapper[4932]: I0321 09:18:41.934288 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bdfdfdf5-x4nwh"] Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.086039 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c74d85757-kjlwf"] Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.321996 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nbzvp"] Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.930863 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" event={"ID":"829e124a-b451-4e8c-9d46-594c51a71418","Type":"ContainerStarted","Data":"6490c90c4c95cc0baaa183d6485faed9922b5af04aede6033b4a83168b162411"} Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.931452 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.937909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nbzvp" event={"ID":"faee8175-5928-4824-be91-e0da3c01b71a","Type":"ContainerStarted","Data":"60a38082968445e049ea86f13a7416ccb1a55b1b20b546ceb19b713d45309991"} Mar 21 09:18:42 crc kubenswrapper[4932]: 
I0321 09:18:42.937954 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nbzvp" event={"ID":"faee8175-5928-4824-be91-e0da3c01b71a","Type":"ContainerStarted","Data":"818221d5b1f1fd574ced2870fab054b07974ae75ffb4ccbc223c1beef4ebee45"} Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.959565 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" podStartSLOduration=4.959551439 podStartE2EDuration="4.959551439s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:42.955238555 +0000 UTC m=+1226.550436824" watchObservedRunningTime="2026-03-21 09:18:42.959551439 +0000 UTC m=+1226.554749708" Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.974848 4932 generic.go:334] "Generic (PLEG): container finished" podID="1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" containerID="3c4d336a5bbe2d06203cd10f04946393130b4c9e1ba60f418860e5921a36735b" exitCode=0 Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.974944 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k7vwh" event={"ID":"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e","Type":"ContainerDied","Data":"3c4d336a5bbe2d06203cd10f04946393130b4c9e1ba60f418860e5921a36735b"} Mar 21 09:18:42 crc kubenswrapper[4932]: I0321 09:18:42.979279 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nbzvp" podStartSLOduration=1.979258179 podStartE2EDuration="1.979258179s" podCreationTimestamp="2026-03-21 09:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:42.976079901 +0000 UTC m=+1226.571278170" watchObservedRunningTime="2026-03-21 09:18:42.979258179 +0000 UTC m=+1226.574456448" Mar 21 09:18:42 crc 
kubenswrapper[4932]: I0321 09:18:42.982243 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"046a48be-a77b-443a-b40d-c94a3c6bb34b","Type":"ContainerStarted","Data":"77b5a0246f496d6b30f7e1f49de718ee5bcb6edc0034140f1edc7488d6ac63d1"} Mar 21 09:18:43 crc kubenswrapper[4932]: I0321 09:18:43.002475 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c74d85757-kjlwf" event={"ID":"e8adb280-44b6-4fb9-b358-aa75af003a44","Type":"ContainerStarted","Data":"333601fa523ccfb7121cb5034efe3a6e1d2470fae8418c3ce83653e938998446"} Mar 21 09:18:43 crc kubenswrapper[4932]: I0321 09:18:43.006880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7768d15-7d06-4d42-a33d-0f4b9113fe6b","Type":"ContainerStarted","Data":"6825e77cd1f92edb42c5fcbdb87f997ba1ef386603eea11a7e369fe543582f30"} Mar 21 09:18:43 crc kubenswrapper[4932]: I0321 09:18:43.719270 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4b2568-70c2-423f-9c64-170c3d6dd610" path="/var/lib/kubelet/pods/cc4b2568-70c2-423f-9c64-170c3d6dd610/volumes" Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.023416 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"046a48be-a77b-443a-b40d-c94a3c6bb34b","Type":"ContainerStarted","Data":"4d5c3438385fad79e4841b944a2b7573bd9d689f5005a07dc6ee241bdff0269d"} Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.023597 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-log" containerID="cri-o://77b5a0246f496d6b30f7e1f49de718ee5bcb6edc0034140f1edc7488d6ac63d1" gracePeriod=30 Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.023613 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-httpd" containerID="cri-o://4d5c3438385fad79e4841b944a2b7573bd9d689f5005a07dc6ee241bdff0269d" gracePeriod=30 Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.026131 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-log" containerID="cri-o://6825e77cd1f92edb42c5fcbdb87f997ba1ef386603eea11a7e369fe543582f30" gracePeriod=30 Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.026217 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-httpd" containerID="cri-o://f1381b54addd3f3d69a7babc6def920636f223b1b9cf888ae4126ef6a122ff6c" gracePeriod=30 Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.026909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7768d15-7d06-4d42-a33d-0f4b9113fe6b","Type":"ContainerStarted","Data":"f1381b54addd3f3d69a7babc6def920636f223b1b9cf888ae4126ef6a122ff6c"} Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.065287 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.065263227 podStartE2EDuration="6.065263227s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:44.046723472 +0000 UTC m=+1227.641921761" watchObservedRunningTime="2026-03-21 09:18:44.065263227 +0000 UTC m=+1227.660461496" Mar 21 09:18:44 crc kubenswrapper[4932]: I0321 09:18:44.079841 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.079826538 podStartE2EDuration="6.079826538s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:18:44.079095715 +0000 UTC m=+1227.674293984" watchObservedRunningTime="2026-03-21 09:18:44.079826538 +0000 UTC m=+1227.675024807" Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.038465 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerID="f1381b54addd3f3d69a7babc6def920636f223b1b9cf888ae4126ef6a122ff6c" exitCode=0 Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.039021 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerID="6825e77cd1f92edb42c5fcbdb87f997ba1ef386603eea11a7e369fe543582f30" exitCode=143 Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.038543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7768d15-7d06-4d42-a33d-0f4b9113fe6b","Type":"ContainerDied","Data":"f1381b54addd3f3d69a7babc6def920636f223b1b9cf888ae4126ef6a122ff6c"} Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.039070 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7768d15-7d06-4d42-a33d-0f4b9113fe6b","Type":"ContainerDied","Data":"6825e77cd1f92edb42c5fcbdb87f997ba1ef386603eea11a7e369fe543582f30"} Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.042015 4932 generic.go:334] "Generic (PLEG): container finished" podID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerID="4d5c3438385fad79e4841b944a2b7573bd9d689f5005a07dc6ee241bdff0269d" exitCode=0 Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.042039 4932 generic.go:334] "Generic (PLEG): container finished" podID="046a48be-a77b-443a-b40d-c94a3c6bb34b" 
containerID="77b5a0246f496d6b30f7e1f49de718ee5bcb6edc0034140f1edc7488d6ac63d1" exitCode=143 Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.042063 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"046a48be-a77b-443a-b40d-c94a3c6bb34b","Type":"ContainerDied","Data":"4d5c3438385fad79e4841b944a2b7573bd9d689f5005a07dc6ee241bdff0269d"} Mar 21 09:18:45 crc kubenswrapper[4932]: I0321 09:18:45.042113 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"046a48be-a77b-443a-b40d-c94a3c6bb34b","Type":"ContainerDied","Data":"77b5a0246f496d6b30f7e1f49de718ee5bcb6edc0034140f1edc7488d6ac63d1"} Mar 21 09:18:46 crc kubenswrapper[4932]: I0321 09:18:46.055342 4932 generic.go:334] "Generic (PLEG): container finished" podID="4bee703b-c944-4a5b-9ed7-2e1aaa74292c" containerID="bc8fe310a3f81989e870f3cca4f6e5523a75b5b8fa09b2ea7ffad2deafb5f531" exitCode=0 Mar 21 09:18:46 crc kubenswrapper[4932]: I0321 09:18:46.055398 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp2fk" event={"ID":"4bee703b-c944-4a5b-9ed7-2e1aaa74292c","Type":"ContainerDied","Data":"bc8fe310a3f81989e870f3cca4f6e5523a75b5b8fa09b2ea7ffad2deafb5f531"} Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.351006 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bfc8fff89-rv95c"] Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.392548 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fbf6fd964-2w7xj"] Mar 21 09:18:47 crc kubenswrapper[4932]: E0321 09:18:47.392948 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4b2568-70c2-423f-9c64-170c3d6dd610" containerName="init" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.392967 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4b2568-70c2-423f-9c64-170c3d6dd610" containerName="init" Mar 21 09:18:47 crc 
kubenswrapper[4932]: I0321 09:18:47.393162 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4b2568-70c2-423f-9c64-170c3d6dd610" containerName="init" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.394192 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.402336 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-horizon-secret-key\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425759 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-combined-ca-bundle\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425786 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hvw\" (UniqueName: \"kubernetes.io/projected/13285608-51c1-4307-a442-e0cd0e881385-kube-api-access-c8hvw\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425804 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13285608-51c1-4307-a442-e0cd0e881385-logs\") pod 
\"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425847 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13285608-51c1-4307-a442-e0cd0e881385-config-data\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425863 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-horizon-tls-certs\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.425887 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13285608-51c1-4307-a442-e0cd0e881385-scripts\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.438756 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fbf6fd964-2w7xj"] Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.485470 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c74d85757-kjlwf"] Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hvw\" (UniqueName: \"kubernetes.io/projected/13285608-51c1-4307-a442-e0cd0e881385-kube-api-access-c8hvw\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " 
pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13285608-51c1-4307-a442-e0cd0e881385-logs\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527173 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-horizon-tls-certs\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527191 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13285608-51c1-4307-a442-e0cd0e881385-config-data\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527217 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13285608-51c1-4307-a442-e0cd0e881385-scripts\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527301 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-horizon-secret-key\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.527383 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-combined-ca-bundle\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.529045 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13285608-51c1-4307-a442-e0cd0e881385-scripts\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.530208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13285608-51c1-4307-a442-e0cd0e881385-config-data\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.530277 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13285608-51c1-4307-a442-e0cd0e881385-logs\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.545602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-horizon-secret-key\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.546120 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-combined-ca-bundle\") pod 
\"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.558013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13285608-51c1-4307-a442-e0cd0e881385-horizon-tls-certs\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.565513 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hvw\" (UniqueName: \"kubernetes.io/projected/13285608-51c1-4307-a442-e0cd0e881385-kube-api-access-c8hvw\") pod \"horizon-fbf6fd964-2w7xj\" (UID: \"13285608-51c1-4307-a442-e0cd0e881385\") " pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.575101 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7998c44c8d-kb65g"] Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.576935 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.587058 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7998c44c8d-kb65g"] Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.629933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2137f88-2dc2-4718-bd8d-229745974b9a-config-data\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.629996 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-horizon-tls-certs\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.630763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2137f88-2dc2-4718-bd8d-229745974b9a-logs\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.630802 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wqxd\" (UniqueName: \"kubernetes.io/projected/a2137f88-2dc2-4718-bd8d-229745974b9a-kube-api-access-5wqxd\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.630827 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-combined-ca-bundle\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.630849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-horizon-secret-key\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.630873 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2137f88-2dc2-4718-bd8d-229745974b9a-scripts\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2137f88-2dc2-4718-bd8d-229745974b9a-config-data\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733623 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-horizon-tls-certs\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733731 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a2137f88-2dc2-4718-bd8d-229745974b9a-logs\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wqxd\" (UniqueName: \"kubernetes.io/projected/a2137f88-2dc2-4718-bd8d-229745974b9a-kube-api-access-5wqxd\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733784 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-combined-ca-bundle\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733809 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-horizon-secret-key\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.733837 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2137f88-2dc2-4718-bd8d-229745974b9a-scripts\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.734549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2137f88-2dc2-4718-bd8d-229745974b9a-scripts\") pod \"horizon-7998c44c8d-kb65g\" 
(UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.735447 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2137f88-2dc2-4718-bd8d-229745974b9a-config-data\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.740100 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2137f88-2dc2-4718-bd8d-229745974b9a-logs\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.740142 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.742507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-horizon-tls-certs\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.744685 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-horizon-secret-key\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.744959 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2137f88-2dc2-4718-bd8d-229745974b9a-combined-ca-bundle\") 
pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.758341 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wqxd\" (UniqueName: \"kubernetes.io/projected/a2137f88-2dc2-4718-bd8d-229745974b9a-kube-api-access-5wqxd\") pod \"horizon-7998c44c8d-kb65g\" (UID: \"a2137f88-2dc2-4718-bd8d-229745974b9a\") " pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:47 crc kubenswrapper[4932]: I0321 09:18:47.947331 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.252887 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.268096 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-combined-ca-bundle\") pod \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.268168 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww8hs\" (UniqueName: \"kubernetes.io/projected/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-kube-api-access-ww8hs\") pod \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.268265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-config-data\") pod \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 
09:18:49.268333 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-db-sync-config-data\") pod \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\" (UID: \"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e\") " Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.278871 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-kube-api-access-ww8hs" (OuterVolumeSpecName: "kube-api-access-ww8hs") pod "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" (UID: "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e"). InnerVolumeSpecName "kube-api-access-ww8hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.278917 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" (UID: "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.312591 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" (UID: "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.328163 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-config-data" (OuterVolumeSpecName: "config-data") pod "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" (UID: "1fd481d0-7b76-4ce4-9b88-4f8d37125f7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.370001 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.370037 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.370046 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.370056 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww8hs\" (UniqueName: \"kubernetes.io/projected/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e-kube-api-access-ww8hs\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.584571 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.673774 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6bbf886c-qvkdt"] Mar 21 09:18:49 crc kubenswrapper[4932]: I0321 09:18:49.677451 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" containerID="cri-o://c3889e2a0364a53dc1ad530d17d44adac71cdcef9519581b9cd6b90974513c13" gracePeriod=10 Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.096242 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-k7vwh" event={"ID":"1fd481d0-7b76-4ce4-9b88-4f8d37125f7e","Type":"ContainerDied","Data":"77103851da0f9eb60efe8fd0c411a47f56d1635fcea5fbfa885c2b32d30c565b"} Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.096263 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-k7vwh" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.096277 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77103851da0f9eb60efe8fd0c411a47f56d1635fcea5fbfa885c2b32d30c565b" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.098372 4932 generic.go:334] "Generic (PLEG): container finished" podID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerID="c3889e2a0364a53dc1ad530d17d44adac71cdcef9519581b9cd6b90974513c13" exitCode=0 Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.098414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" event={"ID":"699d7b1e-6190-4000-8035-0a2c288a53f7","Type":"ContainerDied","Data":"c3889e2a0364a53dc1ad530d17d44adac71cdcef9519581b9cd6b90974513c13"} Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.509729 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:18:50 crc kubenswrapper[4932]: E0321 09:18:50.511293 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" containerName="watcher-db-sync" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.511427 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" containerName="watcher-db-sync" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.511784 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" containerName="watcher-db-sync" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.512628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.528096 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.529029 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.529745 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-fmcmm" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.615568 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.619448 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.621555 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.649872 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.652836 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.670289 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710598 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkww9\" (UniqueName: \"kubernetes.io/projected/95c94c46-fec1-499c-8ae2-aab0899f87df-kube-api-access-qkww9\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710689 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c94c46-fec1-499c-8ae2-aab0899f87df-logs\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710733 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-logs\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-config-data\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710789 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710873 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277bh\" (UniqueName: \"kubernetes.io/projected/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-kube-api-access-277bh\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.710914 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.719432 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 
09:18:50.777187 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812440 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-277bh\" (UniqueName: \"kubernetes.io/projected/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-kube-api-access-277bh\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812513 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5zf\" (UniqueName: \"kubernetes.io/projected/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-kube-api-access-cz5zf\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-config-data\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " 
pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812623 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkww9\" (UniqueName: \"kubernetes.io/projected/95c94c46-fec1-499c-8ae2-aab0899f87df-kube-api-access-qkww9\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812677 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c94c46-fec1-499c-8ae2-aab0899f87df-logs\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-logs\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-logs\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812749 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-config-data\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812765 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.812806 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.815122 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c94c46-fec1-499c-8ae2-aab0899f87df-logs\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.823219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-config-data\") pod \"watcher-decision-engine-0\" (UID: 
\"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.823730 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.823982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-logs\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.828980 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.830064 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-config-data\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.837934 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-277bh\" (UniqueName: \"kubernetes.io/projected/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-kube-api-access-277bh\") pod \"watcher-decision-engine-0\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 
09:18:50.858884 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.859740 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.861829 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkww9\" (UniqueName: \"kubernetes.io/projected/95c94c46-fec1-499c-8ae2-aab0899f87df-kube-api-access-qkww9\") pod \"watcher-applier-0\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " pod="openstack/watcher-applier-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.918405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-logs\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.918466 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.918505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 
09:18:50.918586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5zf\" (UniqueName: \"kubernetes.io/projected/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-kube-api-access-cz5zf\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.918631 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-config-data\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.918771 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-logs\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.928031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.928670 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-config-data\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.939890 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:50 crc kubenswrapper[4932]: I0321 09:18:50.944898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz5zf\" (UniqueName: \"kubernetes.io/projected/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-kube-api-access-cz5zf\") pod \"watcher-api-0\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") " pod="openstack/watcher-api-0" Mar 21 09:18:51 crc kubenswrapper[4932]: I0321 09:18:51.014078 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 21 09:18:51 crc kubenswrapper[4932]: I0321 09:18:51.037077 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:18:53 crc kubenswrapper[4932]: I0321 09:18:53.899102 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Mar 21 09:18:56 crc kubenswrapper[4932]: E0321 09:18:56.704015 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Mar 21 09:18:56 crc kubenswrapper[4932]: E0321 09:18:56.704436 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Mar 21 09:18:56 crc kubenswrapper[4932]: E0321 09:18:56.704548 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.159:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-h2dqh_openstack(d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:18:56 crc kubenswrapper[4932]: E0321 09:18:56.705765 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-h2dqh" podUID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" Mar 21 09:18:57 crc kubenswrapper[4932]: E0321 09:18:57.145780 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Mar 21 09:18:57 crc kubenswrapper[4932]: E0321 09:18:57.145850 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Mar 21 09:18:57 crc kubenswrapper[4932]: E0321 09:18:57.145977 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5hpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kcpjh_openstack(512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:18:57 crc kubenswrapper[4932]: E0321 09:18:57.152569 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kcpjh" 
podUID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.177551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"046a48be-a77b-443a-b40d-c94a3c6bb34b","Type":"ContainerDied","Data":"ef5b133c97b5d464c81630055d3be54666ce5275fe296e12a9855b0506296a45"} Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.177591 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5b133c97b5d464c81630055d3be54666ce5275fe296e12a9855b0506296a45" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.179299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp2fk" event={"ID":"4bee703b-c944-4a5b-9ed7-2e1aaa74292c","Type":"ContainerDied","Data":"01ca7ae4ff7a0b78b1dfb29455316d6deda062e7b2cf390c6073c5595c466893"} Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.179321 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ca7ae4ff7a0b78b1dfb29455316d6deda062e7b2cf390c6073c5595c466893" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.181881 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f7768d15-7d06-4d42-a33d-0f4b9113fe6b","Type":"ContainerDied","Data":"58435cd7f5c88f326b0468d95856fbb128eea7e486ae1fe2da9d7045cd833bb9"} Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.182471 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58435cd7f5c88f326b0468d95856fbb128eea7e486ae1fe2da9d7045cd833bb9" Mar 21 09:18:57 crc kubenswrapper[4932]: E0321 09:18:57.183332 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-kcpjh" 
podUID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" Mar 21 09:18:57 crc kubenswrapper[4932]: E0321 09:18:57.185156 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-h2dqh" podUID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.251338 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.277859 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.281265 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.348714 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-credential-keys\") pod \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.349617 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-fernet-keys\") pod \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.351736 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-scripts\") pod 
\"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.351826 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.351918 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-internal-tls-certs\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.351971 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-combined-ca-bundle\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.357467 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4bee703b-c944-4a5b-9ed7-2e1aaa74292c" (UID: "4bee703b-c944-4a5b-9ed7-2e1aaa74292c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.357543 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-scripts" (OuterVolumeSpecName: "scripts") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.369373 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.379001 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4bee703b-c944-4a5b-9ed7-2e1aaa74292c" (UID: "4bee703b-c944-4a5b-9ed7-2e1aaa74292c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.388488 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.426719 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455123 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-combined-ca-bundle\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455189 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-config-data\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455212 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbk9c\" (UniqueName: \"kubernetes.io/projected/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-kube-api-access-hbk9c\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455241 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-httpd-run\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455302 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-config-data\") pod \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455327 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc7nj\" (UniqueName: 
\"kubernetes.io/projected/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-kube-api-access-tc7nj\") pod \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455368 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-config-data\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455396 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-scripts\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455680 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-public-tls-certs\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-combined-ca-bundle\") pod \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455731 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-logs\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455758 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-scripts\") pod \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\" (UID: \"4bee703b-c944-4a5b-9ed7-2e1aaa74292c\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455783 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgm2r\" (UniqueName: \"kubernetes.io/projected/046a48be-a77b-443a-b40d-c94a3c6bb34b-kube-api-access-wgm2r\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455806 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-httpd-run\") pod \"046a48be-a77b-443a-b40d-c94a3c6bb34b\" (UID: \"046a48be-a77b-443a-b40d-c94a3c6bb34b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.455822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-logs\") pod \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\" (UID: \"f7768d15-7d06-4d42-a33d-0f4b9113fe6b\") " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.456305 4932 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.456329 4932 reconciler_common.go:293] "Volume 
detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.456342 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.456381 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.456394 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.456407 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.459433 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-logs" (OuterVolumeSpecName: "logs") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.459712 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.460109 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.460383 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-logs" (OuterVolumeSpecName: "logs") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.460500 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.460850 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-scripts" (OuterVolumeSpecName: "scripts") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.462505 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-kube-api-access-tc7nj" (OuterVolumeSpecName: "kube-api-access-tc7nj") pod "4bee703b-c944-4a5b-9ed7-2e1aaa74292c" (UID: "4bee703b-c944-4a5b-9ed7-2e1aaa74292c"). InnerVolumeSpecName "kube-api-access-tc7nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.464593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046a48be-a77b-443a-b40d-c94a3c6bb34b-kube-api-access-wgm2r" (OuterVolumeSpecName: "kube-api-access-wgm2r") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "kube-api-access-wgm2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.466113 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-scripts" (OuterVolumeSpecName: "scripts") pod "4bee703b-c944-4a5b-9ed7-2e1aaa74292c" (UID: "4bee703b-c944-4a5b-9ed7-2e1aaa74292c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.471027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-kube-api-access-hbk9c" (OuterVolumeSpecName: "kube-api-access-hbk9c") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "kube-api-access-hbk9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.481122 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.482917 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.484055 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-config-data" (OuterVolumeSpecName: "config-data") pod "4bee703b-c944-4a5b-9ed7-2e1aaa74292c" (UID: "4bee703b-c944-4a5b-9ed7-2e1aaa74292c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.491573 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bee703b-c944-4a5b-9ed7-2e1aaa74292c" (UID: "4bee703b-c944-4a5b-9ed7-2e1aaa74292c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.513775 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-config-data" (OuterVolumeSpecName: "config-data") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.514759 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "046a48be-a77b-443a-b40d-c94a3c6bb34b" (UID: "046a48be-a77b-443a-b40d-c94a3c6bb34b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.524393 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-config-data" (OuterVolumeSpecName: "config-data") pod "f7768d15-7d06-4d42-a33d-0f4b9113fe6b" (UID: "f7768d15-7d06-4d42-a33d-0f4b9113fe6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.559953 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560048 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560062 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560074 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgm2r\" (UniqueName: \"kubernetes.io/projected/046a48be-a77b-443a-b40d-c94a3c6bb34b-kube-api-access-wgm2r\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc 
kubenswrapper[4932]: I0321 09:18:57.560088 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560123 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/046a48be-a77b-443a-b40d-c94a3c6bb34b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560136 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560147 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560159 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbk9c\" (UniqueName: \"kubernetes.io/projected/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-kube-api-access-hbk9c\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560169 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560243 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560257 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc7nj\" (UniqueName: 
\"kubernetes.io/projected/4bee703b-c944-4a5b-9ed7-2e1aaa74292c-kube-api-access-tc7nj\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560268 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560278 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7768d15-7d06-4d42-a33d-0f4b9113fe6b-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560341 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560384 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.560397 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/046a48be-a77b-443a-b40d-c94a3c6bb34b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.592179 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 21 09:18:57 crc kubenswrapper[4932]: I0321 09:18:57.661145 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.190838 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tp2fk" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.190872 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.190838 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.231489 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.241816 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.249173 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.256767 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268058 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: E0321 09:18:58.268585 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-httpd" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268603 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-httpd" Mar 21 09:18:58 crc kubenswrapper[4932]: E0321 09:18:58.268627 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-log" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268634 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-log" Mar 21 09:18:58 crc kubenswrapper[4932]: E0321 09:18:58.268644 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bee703b-c944-4a5b-9ed7-2e1aaa74292c" containerName="keystone-bootstrap" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268651 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bee703b-c944-4a5b-9ed7-2e1aaa74292c" containerName="keystone-bootstrap" Mar 21 09:18:58 crc kubenswrapper[4932]: E0321 09:18:58.268661 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-httpd" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268667 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-httpd" Mar 21 09:18:58 crc kubenswrapper[4932]: E0321 09:18:58.268682 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-log" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268688 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-log" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268864 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-log" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268876 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" containerName="glance-httpd" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268885 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-log" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268900 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" containerName="glance-httpd" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.268909 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bee703b-c944-4a5b-9ed7-2e1aaa74292c" containerName="keystone-bootstrap" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.270065 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.273803 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.274155 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.275306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pn4p2" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.279014 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.280736 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.289003 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.289611 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.289753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.294762 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.298055 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.356042 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tp2fk"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.364391 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tp2fk"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.454105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cmdkb"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.455757 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.459921 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.460043 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fkdzk" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.460079 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.460106 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.460162 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.465768 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cmdkb"] Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.473833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.473875 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.473895 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.473931 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.473962 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474002 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474038 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdbv\" (UniqueName: \"kubernetes.io/projected/bb2449ef-1380-4083-87cd-242b41f821ac-kube-api-access-6vdbv\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqr25\" (UniqueName: \"kubernetes.io/projected/7a277844-3590-4db2-af83-026af5697238-kube-api-access-lqr25\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 
09:18:58.474138 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474155 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-logs\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474181 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.474210 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575265 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-fernet-keys\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575308 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c2p4\" (UniqueName: \"kubernetes.io/projected/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-kube-api-access-6c2p4\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575394 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575418 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vdbv\" (UniqueName: \"kubernetes.io/projected/bb2449ef-1380-4083-87cd-242b41f821ac-kube-api-access-6vdbv\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575480 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lqr25\" (UniqueName: \"kubernetes.io/projected/7a277844-3590-4db2-af83-026af5697238-kube-api-access-lqr25\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575511 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575535 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-credential-keys\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575612 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-logs\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575637 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-config-data\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575670 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-combined-ca-bundle\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575741 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575781 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-scripts\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575806 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575858 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575902 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.575945 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.576121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.576457 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.576973 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.577644 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.577717 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-logs\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " 
pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.577673 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.581862 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.582659 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.585026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.586516 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 
09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.592170 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqr25\" (UniqueName: \"kubernetes.io/projected/7a277844-3590-4db2-af83-026af5697238-kube-api-access-lqr25\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.592389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.596410 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vdbv\" (UniqueName: \"kubernetes.io/projected/bb2449ef-1380-4083-87cd-242b41f821ac-kube-api-access-6vdbv\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.596419 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.596494 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.607123 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.613105 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.625198 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " pod="openstack/glance-default-external-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.678027 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-scripts\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.678124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-fernet-keys\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.678153 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c2p4\" (UniqueName: 
\"kubernetes.io/projected/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-kube-api-access-6c2p4\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.678209 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-credential-keys\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.678241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-config-data\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.678277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-combined-ca-bundle\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.682261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-fernet-keys\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.682937 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-config-data\") pod \"keystone-bootstrap-cmdkb\" (UID: 
\"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.683563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-credential-keys\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.683577 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-combined-ca-bundle\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.692595 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-scripts\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.712989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c2p4\" (UniqueName: \"kubernetes.io/projected/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-kube-api-access-6c2p4\") pod \"keystone-bootstrap-cmdkb\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.777556 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.898710 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:18:58 crc kubenswrapper[4932]: I0321 09:18:58.914260 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:18:59 crc kubenswrapper[4932]: I0321 09:18:59.713423 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046a48be-a77b-443a-b40d-c94a3c6bb34b" path="/var/lib/kubelet/pods/046a48be-a77b-443a-b40d-c94a3c6bb34b/volumes" Mar 21 09:18:59 crc kubenswrapper[4932]: I0321 09:18:59.714513 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bee703b-c944-4a5b-9ed7-2e1aaa74292c" path="/var/lib/kubelet/pods/4bee703b-c944-4a5b-9ed7-2e1aaa74292c/volumes" Mar 21 09:18:59 crc kubenswrapper[4932]: I0321 09:18:59.715402 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7768d15-7d06-4d42-a33d-0f4b9113fe6b" path="/var/lib/kubelet/pods/f7768d15-7d06-4d42-a33d-0f4b9113fe6b/volumes" Mar 21 09:19:00 crc kubenswrapper[4932]: I0321 09:19:00.225913 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:19:00 crc kubenswrapper[4932]: I0321 09:19:00.225982 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:19:03 crc kubenswrapper[4932]: I0321 09:19:03.898821 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.143:5353: i/o timeout" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.716819 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.835711 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-sb\") pod \"699d7b1e-6190-4000-8035-0a2c288a53f7\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.835857 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-svc\") pod \"699d7b1e-6190-4000-8035-0a2c288a53f7\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.835965 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-nb\") pod \"699d7b1e-6190-4000-8035-0a2c288a53f7\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.836160 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-swift-storage-0\") pod \"699d7b1e-6190-4000-8035-0a2c288a53f7\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.836199 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-config\") pod \"699d7b1e-6190-4000-8035-0a2c288a53f7\" (UID: 
\"699d7b1e-6190-4000-8035-0a2c288a53f7\") " Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.836420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2s2\" (UniqueName: \"kubernetes.io/projected/699d7b1e-6190-4000-8035-0a2c288a53f7-kube-api-access-9t2s2\") pod \"699d7b1e-6190-4000-8035-0a2c288a53f7\" (UID: \"699d7b1e-6190-4000-8035-0a2c288a53f7\") " Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.841328 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699d7b1e-6190-4000-8035-0a2c288a53f7-kube-api-access-9t2s2" (OuterVolumeSpecName: "kube-api-access-9t2s2") pod "699d7b1e-6190-4000-8035-0a2c288a53f7" (UID: "699d7b1e-6190-4000-8035-0a2c288a53f7"). InnerVolumeSpecName "kube-api-access-9t2s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.883024 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "699d7b1e-6190-4000-8035-0a2c288a53f7" (UID: "699d7b1e-6190-4000-8035-0a2c288a53f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.889256 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "699d7b1e-6190-4000-8035-0a2c288a53f7" (UID: "699d7b1e-6190-4000-8035-0a2c288a53f7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.890720 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-config" (OuterVolumeSpecName: "config") pod "699d7b1e-6190-4000-8035-0a2c288a53f7" (UID: "699d7b1e-6190-4000-8035-0a2c288a53f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.891495 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "699d7b1e-6190-4000-8035-0a2c288a53f7" (UID: "699d7b1e-6190-4000-8035-0a2c288a53f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.902688 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "699d7b1e-6190-4000-8035-0a2c288a53f7" (UID: "699d7b1e-6190-4000-8035-0a2c288a53f7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.940287 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t2s2\" (UniqueName: \"kubernetes.io/projected/699d7b1e-6190-4000-8035-0a2c288a53f7-kube-api-access-9t2s2\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.940327 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.940339 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.940365 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.940377 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:06 crc kubenswrapper[4932]: I0321 09:19:06.940388 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699d7b1e-6190-4000-8035-0a2c288a53f7-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.041106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fbf6fd964-2w7xj"] Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.268719 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" 
event={"ID":"699d7b1e-6190-4000-8035-0a2c288a53f7","Type":"ContainerDied","Data":"c8048acfb9f7f331e50fc9ed6ba544e418e65a1944bba718208e412afdfc4fd4"} Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.268773 4932 scope.go:117] "RemoveContainer" containerID="c3889e2a0364a53dc1ad530d17d44adac71cdcef9519581b9cd6b90974513c13" Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.268874 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.313292 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6bbf886c-qvkdt"] Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.325153 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6bbf886c-qvkdt"] Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.712055 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" path="/var/lib/kubelet/pods/699d7b1e-6190-4000-8035-0a2c288a53f7/volumes" Mar 21 09:19:07 crc kubenswrapper[4932]: W0321 09:19:07.784807 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-a43890342dbc89184ec76d2cbd47cd7922226a5de33acd0dade0c7b26b6f01cf WatchSource:0}: Error finding container a43890342dbc89184ec76d2cbd47cd7922226a5de33acd0dade0c7b26b6f01cf: Status 404 returned error can't find the container with id a43890342dbc89184ec76d2cbd47cd7922226a5de33acd0dade0c7b26b6f01cf Mar 21 09:19:07 crc kubenswrapper[4932]: E0321 09:19:07.797957 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Mar 21 09:19:07 crc kubenswrapper[4932]: E0321 09:19:07.798010 4932 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Mar 21 09:19:07 crc kubenswrapper[4932]: E0321 09:19:07.798129 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg2c8,ReadOn
ly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vz4hd_openstack(37076824-e8b6-4b75-aea4-f463d7e50613): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 21 09:19:07 crc kubenswrapper[4932]: E0321 09:19:07.799618 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vz4hd" podUID="37076824-e8b6-4b75-aea4-f463d7e50613" Mar 21 09:19:07 crc kubenswrapper[4932]: I0321 09:19:07.801405 4932 scope.go:117] "RemoveContainer" containerID="f192b740d7f25bfce99b428d3272fcae81ec637ea301812b7017ffb5a463bd87" Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.323031 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"3e2e4367df9f643fc535386ba9e271844b4bb725d0034fd147afe3b1559d9f5f"} Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.323757 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" 
event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"a43890342dbc89184ec76d2cbd47cd7922226a5de33acd0dade0c7b26b6f01cf"} Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.327060 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc94d857-qzbl4" event={"ID":"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da","Type":"ContainerStarted","Data":"3e2b9cd771e5caba7f89839233dbdcbc1d9554d932a1f6a18b1d5fcfc6cf5be3"} Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.356084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bfc8fff89-rv95c" event={"ID":"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a","Type":"ContainerStarted","Data":"4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab"} Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.358091 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerStarted","Data":"8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552"} Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.366796 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c74d85757-kjlwf" event={"ID":"e8adb280-44b6-4fb9-b358-aa75af003a44","Type":"ContainerStarted","Data":"5f7c09f5d03374096bf0a5b6ce90263e7e3989080c74f976cb78e4e07d77f7e5"} Mar 21 09:19:08 crc kubenswrapper[4932]: E0321 09:19:08.395669 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.159:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-vz4hd" podUID="37076824-e8b6-4b75-aea4-f463d7e50613" Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.578564 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 
09:19:08.591614 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 21 09:19:08 crc kubenswrapper[4932]: W0321 09:19:08.596073 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c94c46_fec1_499c_8ae2_aab0899f87df.slice/crio-f98deedde911b44e86d4286291f865d76e54f9a2f6dc27f7561590016087abfd WatchSource:0}: Error finding container f98deedde911b44e86d4286291f865d76e54f9a2f6dc27f7561590016087abfd: Status 404 returned error can't find the container with id f98deedde911b44e86d4286291f865d76e54f9a2f6dc27f7561590016087abfd Mar 21 09:19:08 crc kubenswrapper[4932]: W0321 09:19:08.596524 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2137f88_2dc2_4718_bd8d_229745974b9a.slice/crio-5475aebecb7ca83b94a3fc3cb1f57f230e3c7176eac17a9d0af903a3205f012c WatchSource:0}: Error finding container 5475aebecb7ca83b94a3fc3cb1f57f230e3c7176eac17a9d0af903a3205f012c: Status 404 returned error can't find the container with id 5475aebecb7ca83b94a3fc3cb1f57f230e3c7176eac17a9d0af903a3205f012c Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.613714 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7998c44c8d-kb65g"] Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.625858 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cmdkb"] Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.635720 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:19:08 crc kubenswrapper[4932]: W0321 09:19:08.638868 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba11bdb_a9d2_414d_b2df_3eaedd97df7e.slice/crio-42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b WatchSource:0}: Error finding 
container 42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b: Status 404 returned error can't find the container with id 42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b Mar 21 09:19:08 crc kubenswrapper[4932]: W0321 09:19:08.665128 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a74ebb_1fc7_4e31_82dc_cb839cceeffb.slice/crio-5aa1ddc866fe0a5464d3e3665373245af140c05e6b683795ac016055b4b562f3 WatchSource:0}: Error finding container 5aa1ddc866fe0a5464d3e3665373245af140c05e6b683795ac016055b4b562f3: Status 404 returned error can't find the container with id 5aa1ddc866fe0a5464d3e3665373245af140c05e6b683795ac016055b4b562f3 Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.861720 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:19:08 crc kubenswrapper[4932]: I0321 09:19:08.899417 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6bbf886c-qvkdt" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: i/o timeout" Mar 21 09:19:08 crc kubenswrapper[4932]: W0321 09:19:08.901486 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a277844_3590_4db2_af83_026af5697238.slice/crio-66179f9dfe0fd3f21cf39ff2e37a519efdd610f5966e1823462bd5983219de6f WatchSource:0}: Error finding container 66179f9dfe0fd3f21cf39ff2e37a519efdd610f5966e1823462bd5983219de6f: Status 404 returned error can't find the container with id 66179f9dfe0fd3f21cf39ff2e37a519efdd610f5966e1823462bd5983219de6f Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.398135 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"95c94c46-fec1-499c-8ae2-aab0899f87df","Type":"ContainerStarted","Data":"f98deedde911b44e86d4286291f865d76e54f9a2f6dc27f7561590016087abfd"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.404686 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc94d857-qzbl4" event={"ID":"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da","Type":"ContainerStarted","Data":"ca3a0ca9da247fcaa5a0b427775d9990ca1a076bc2e692a300a516361e17118e"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.404893 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dc94d857-qzbl4" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon-log" containerID="cri-o://3e2b9cd771e5caba7f89839233dbdcbc1d9554d932a1f6a18b1d5fcfc6cf5be3" gracePeriod=30 Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.405490 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dc94d857-qzbl4" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon" containerID="cri-o://ca3a0ca9da247fcaa5a0b427775d9990ca1a076bc2e692a300a516361e17118e" gracePeriod=30 Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.412095 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a277844-3590-4db2-af83-026af5697238","Type":"ContainerStarted","Data":"66179f9dfe0fd3f21cf39ff2e37a519efdd610f5966e1823462bd5983219de6f"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.418990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19a74ebb-1fc7-4e31-82dc-cb839cceeffb","Type":"ContainerStarted","Data":"3cf8ebc6cab85fca5cda4ba26728bd0232e9fd48ac7e7429c5b981128d9081ab"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.419038 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"19a74ebb-1fc7-4e31-82dc-cb839cceeffb","Type":"ContainerStarted","Data":"5aa1ddc866fe0a5464d3e3665373245af140c05e6b683795ac016055b4b562f3"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.422618 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"8ae605f15175ce9e71e372bc36934f9dc8faf55475a4f3ece55f3805206d2838"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.422655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"5475aebecb7ca83b94a3fc3cb1f57f230e3c7176eac17a9d0af903a3205f012c"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.435340 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"76843e78ee1aed96ecca14b093534f9b5aad60e2922cc9c5d20277979884fa14"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.446023 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7dc94d857-qzbl4" podStartSLOduration=3.4051159650000002 podStartE2EDuration="31.44600445s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:39.782228944 +0000 UTC m=+1223.377427213" lastFinishedPulling="2026-03-21 09:19:07.823117419 +0000 UTC m=+1251.418315698" observedRunningTime="2026-03-21 09:19:09.423657641 +0000 UTC m=+1253.018855910" watchObservedRunningTime="2026-03-21 09:19:09.44600445 +0000 UTC m=+1253.041202719" Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.447874 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerStarted","Data":"50cbe2908b481732241e69d36c2de581f4209b27430c962ad06a77e862d6199f"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.479875 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bfc8fff89-rv95c" event={"ID":"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a","Type":"ContainerStarted","Data":"6227dbe5e60e32f4f6a2f54f08549059b5b6bbce749531e48d9e7d352ee47a40"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.480043 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bfc8fff89-rv95c" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon-log" containerID="cri-o://4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab" gracePeriod=30 Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.480564 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bfc8fff89-rv95c" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon" containerID="cri-o://6227dbe5e60e32f4f6a2f54f08549059b5b6bbce749531e48d9e7d352ee47a40" gracePeriod=30 Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.483701 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.499855 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmdkb" event={"ID":"bba11bdb-a9d2-414d-b2df-3eaedd97df7e","Type":"ContainerStarted","Data":"8254a2d4464dd1924e3f06796cf51cdc9d33af308e51c06413170350cb08e4b8"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.499905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmdkb" event={"ID":"bba11bdb-a9d2-414d-b2df-3eaedd97df7e","Type":"ContainerStarted","Data":"42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 
09:19:09.508685 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fbf6fd964-2w7xj" podStartSLOduration=22.50866425 podStartE2EDuration="22.50866425s" podCreationTimestamp="2026-03-21 09:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:09.459660221 +0000 UTC m=+1253.054858500" watchObservedRunningTime="2026-03-21 09:19:09.50866425 +0000 UTC m=+1253.103862519" Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.510306 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c74d85757-kjlwf" event={"ID":"e8adb280-44b6-4fb9-b358-aa75af003a44","Type":"ContainerStarted","Data":"796c29fb749f6c917d0af3192300a74bb9e33e2a8c17b2fa1475434d5224ae0d"} Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.510604 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c74d85757-kjlwf" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon-log" containerID="cri-o://5f7c09f5d03374096bf0a5b6ce90263e7e3989080c74f976cb78e4e07d77f7e5" gracePeriod=30 Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.511299 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c74d85757-kjlwf" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon" containerID="cri-o://796c29fb749f6c917d0af3192300a74bb9e33e2a8c17b2fa1475434d5224ae0d" gracePeriod=30 Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.533977 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bfc8fff89-rv95c" podStartSLOduration=4.691294174 podStartE2EDuration="31.53395824s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:40.967090131 +0000 UTC m=+1224.562288400" lastFinishedPulling="2026-03-21 09:19:07.809754197 +0000 UTC m=+1251.404952466" 
observedRunningTime="2026-03-21 09:19:09.499679134 +0000 UTC m=+1253.094877403" watchObservedRunningTime="2026-03-21 09:19:09.53395824 +0000 UTC m=+1253.129156509" Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.567363 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cmdkb" podStartSLOduration=11.567330568 podStartE2EDuration="11.567330568s" podCreationTimestamp="2026-03-21 09:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:09.515312486 +0000 UTC m=+1253.110510755" watchObservedRunningTime="2026-03-21 09:19:09.567330568 +0000 UTC m=+1253.162528837" Mar 21 09:19:09 crc kubenswrapper[4932]: I0321 09:19:09.569688 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c74d85757-kjlwf" podStartSLOduration=5.145058091 podStartE2EDuration="29.569679481s" podCreationTimestamp="2026-03-21 09:18:40 +0000 UTC" firstStartedPulling="2026-03-21 09:18:42.195686274 +0000 UTC m=+1225.790884543" lastFinishedPulling="2026-03-21 09:19:06.620307664 +0000 UTC m=+1250.215505933" observedRunningTime="2026-03-21 09:19:09.539010866 +0000 UTC m=+1253.134209125" watchObservedRunningTime="2026-03-21 09:19:09.569679481 +0000 UTC m=+1253.164877750" Mar 21 09:19:10 crc kubenswrapper[4932]: I0321 09:19:10.521059 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb2449ef-1380-4083-87cd-242b41f821ac","Type":"ContainerStarted","Data":"896b47630cde264a63ed20d2bf00f00d6bb93ae02fa2acad697d55ece8b4f751"} Mar 21 09:19:10 crc kubenswrapper[4932]: I0321 09:19:10.525068 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a277844-3590-4db2-af83-026af5697238","Type":"ContainerStarted","Data":"c98b72ac22bf47c15785ca8a84c929bbf86dc057b72ddce5fa68e63115a8655e"} Mar 21 
09:19:11 crc kubenswrapper[4932]: I0321 09:19:11.506416 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:19:11 crc kubenswrapper[4932]: I0321 09:19:11.576167 4932 generic.go:334] "Generic (PLEG): container finished" podID="faee8175-5928-4824-be91-e0da3c01b71a" containerID="60a38082968445e049ea86f13a7416ccb1a55b1b20b546ceb19b713d45309991" exitCode=0 Mar 21 09:19:11 crc kubenswrapper[4932]: I0321 09:19:11.576212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nbzvp" event={"ID":"faee8175-5928-4824-be91-e0da3c01b71a","Type":"ContainerDied","Data":"60a38082968445e049ea86f13a7416ccb1a55b1b20b546ceb19b713d45309991"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.630000 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerStarted","Data":"b4cbb14cc83169e2ef7f60038b8fe8862711567a470ea01d74466ea9ed03954d"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.643329 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerStarted","Data":"878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.652616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2dqh" event={"ID":"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b","Type":"ContainerStarted","Data":"5ed4c0bd6c557eaab65da1a605508b866d54185082a69d6081fd69916ca9d07b"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.661406 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=20.414877006 podStartE2EDuration="22.661316692s" podCreationTimestamp="2026-03-21 09:18:50 +0000 UTC" firstStartedPulling="2026-03-21 09:19:08.59612156 
+0000 UTC m=+1252.191319829" lastFinishedPulling="2026-03-21 09:19:10.842561246 +0000 UTC m=+1254.437759515" observedRunningTime="2026-03-21 09:19:12.653750569 +0000 UTC m=+1256.248948838" watchObservedRunningTime="2026-03-21 09:19:12.661316692 +0000 UTC m=+1256.256514961" Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.668598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"e34569c65ef63ac1abf4d70b584148f3284f2ae7d7ba5186f85a28c1f2539f80"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.684968 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-h2dqh" podStartSLOduration=6.7036523930000005 podStartE2EDuration="34.684947391s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:40.93473205 +0000 UTC m=+1224.529930319" lastFinishedPulling="2026-03-21 09:19:08.916027048 +0000 UTC m=+1252.511225317" observedRunningTime="2026-03-21 09:19:12.677111089 +0000 UTC m=+1256.272309358" watchObservedRunningTime="2026-03-21 09:19:12.684947391 +0000 UTC m=+1256.280145660" Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.685662 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95c94c46-fec1-499c-8ae2-aab0899f87df","Type":"ContainerStarted","Data":"38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.696143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kcpjh" event={"ID":"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a","Type":"ContainerStarted","Data":"650ed92cd749fa114c156d9e7f75efe86217762afc53b9a2718a553a39d2ab9b"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.706609 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/horizon-7998c44c8d-kb65g" podStartSLOduration=25.706591817 podStartE2EDuration="25.706591817s" podCreationTimestamp="2026-03-21 09:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:12.703015177 +0000 UTC m=+1256.298213446" watchObservedRunningTime="2026-03-21 09:19:12.706591817 +0000 UTC m=+1256.301790086" Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.719865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a277844-3590-4db2-af83-026af5697238","Type":"ContainerStarted","Data":"7d7ad18f2d1042866e797998584f25d42110333772132e59bf304edec5f5d997"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.731838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kcpjh" podStartSLOduration=4.275441298 podStartE2EDuration="34.731815485s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:40.379330537 +0000 UTC m=+1223.974528806" lastFinishedPulling="2026-03-21 09:19:10.835704724 +0000 UTC m=+1254.430902993" observedRunningTime="2026-03-21 09:19:12.719903357 +0000 UTC m=+1256.315101626" watchObservedRunningTime="2026-03-21 09:19:12.731815485 +0000 UTC m=+1256.327013754" Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.735290 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb2449ef-1380-4083-87cd-242b41f821ac","Type":"ContainerStarted","Data":"9a2a7548cbaaa4f1d1bee0519cf32e921da16f3b75a3f7500dd9499088c431d1"} Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.753669 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19a74ebb-1fc7-4e31-82dc-cb839cceeffb","Type":"ContainerStarted","Data":"34e7bd308a5a9842a6b0f827f60a61d65285ea20d016fe5e3845b49a56b6b00a"} Mar 21 
09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.760115 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=20.560564146 podStartE2EDuration="22.760095686s" podCreationTimestamp="2026-03-21 09:18:50 +0000 UTC" firstStartedPulling="2026-03-21 09:19:08.60779606 +0000 UTC m=+1252.202994329" lastFinishedPulling="2026-03-21 09:19:10.8073276 +0000 UTC m=+1254.402525869" observedRunningTime="2026-03-21 09:19:12.740186983 +0000 UTC m=+1256.335385252" watchObservedRunningTime="2026-03-21 09:19:12.760095686 +0000 UTC m=+1256.355293955" Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.771013 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.770993282 podStartE2EDuration="14.770993282s" podCreationTimestamp="2026-03-21 09:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:12.7667054 +0000 UTC m=+1256.361903669" watchObservedRunningTime="2026-03-21 09:19:12.770993282 +0000 UTC m=+1256.366191551" Mar 21 09:19:12 crc kubenswrapper[4932]: I0321 09:19:12.794874 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=22.794853838 podStartE2EDuration="22.794853838s" podCreationTimestamp="2026-03-21 09:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:12.787584093 +0000 UTC m=+1256.382782392" watchObservedRunningTime="2026-03-21 09:19:12.794853838 +0000 UTC m=+1256.390052117" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.225948 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.289189 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-config\") pod \"faee8175-5928-4824-be91-e0da3c01b71a\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.289274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6596\" (UniqueName: \"kubernetes.io/projected/faee8175-5928-4824-be91-e0da3c01b71a-kube-api-access-p6596\") pod \"faee8175-5928-4824-be91-e0da3c01b71a\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.289590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-combined-ca-bundle\") pod \"faee8175-5928-4824-be91-e0da3c01b71a\" (UID: \"faee8175-5928-4824-be91-e0da3c01b71a\") " Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.319167 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faee8175-5928-4824-be91-e0da3c01b71a-kube-api-access-p6596" (OuterVolumeSpecName: "kube-api-access-p6596") pod "faee8175-5928-4824-be91-e0da3c01b71a" (UID: "faee8175-5928-4824-be91-e0da3c01b71a"). InnerVolumeSpecName "kube-api-access-p6596". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.326143 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faee8175-5928-4824-be91-e0da3c01b71a" (UID: "faee8175-5928-4824-be91-e0da3c01b71a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.330867 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-config" (OuterVolumeSpecName: "config") pod "faee8175-5928-4824-be91-e0da3c01b71a" (UID: "faee8175-5928-4824-be91-e0da3c01b71a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.392482 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.392822 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/faee8175-5928-4824-be91-e0da3c01b71a-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.392837 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6596\" (UniqueName: \"kubernetes.io/projected/faee8175-5928-4824-be91-e0da3c01b71a-kube-api-access-p6596\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.799165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb2449ef-1380-4083-87cd-242b41f821ac","Type":"ContainerStarted","Data":"dd30c5a284e1aa303b8e49a9d345e8a2d4fc7a4567be38e3a9fb6d103bb8e43c"} Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.822621 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nbzvp" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.823033 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nbzvp" event={"ID":"faee8175-5928-4824-be91-e0da3c01b71a","Type":"ContainerDied","Data":"818221d5b1f1fd574ced2870fab054b07974ae75ffb4ccbc223c1beef4ebee45"} Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.823055 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818221d5b1f1fd574ced2870fab054b07974ae75ffb4ccbc223c1beef4ebee45" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.824514 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.838161 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fdc97999-qqf4w"] Mar 21 09:19:13 crc kubenswrapper[4932]: E0321 09:19:13.838627 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faee8175-5928-4824-be91-e0da3c01b71a" containerName="neutron-db-sync" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.838640 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="faee8175-5928-4824-be91-e0da3c01b71a" containerName="neutron-db-sync" Mar 21 09:19:13 crc kubenswrapper[4932]: E0321 09:19:13.838663 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="init" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.838670 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="init" Mar 21 09:19:13 crc kubenswrapper[4932]: E0321 09:19:13.838688 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.838694 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.838887 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="faee8175-5928-4824-be91-e0da3c01b71a" containerName="neutron-db-sync" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.838907 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="699d7b1e-6190-4000-8035-0a2c288a53f7" containerName="dnsmasq-dns" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.840012 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.864509 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fdc97999-qqf4w"] Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.894927 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.894907217 podStartE2EDuration="15.894907217s" podCreationTimestamp="2026-03-21 09:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:13.8454038 +0000 UTC m=+1257.440602069" watchObservedRunningTime="2026-03-21 09:19:13.894907217 +0000 UTC m=+1257.490105486" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.905573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.905610 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbx7r\" 
(UniqueName: \"kubernetes.io/projected/800026eb-fe3e-4b45-b809-3ec63f7143c9-kube-api-access-xbx7r\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.905658 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-config\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.905749 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.905838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-swift-storage-0\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:13 crc kubenswrapper[4932]: I0321 09:19:13.905880 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-svc\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.002012 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c449d5454-gtqfd"] Mar 21 
09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.004046 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.013111 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-857tb" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.013423 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.013587 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.013708 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.014738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-svc\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.014893 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.014916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbx7r\" (UniqueName: \"kubernetes.io/projected/800026eb-fe3e-4b45-b809-3ec63f7143c9-kube-api-access-xbx7r\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" 
Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.014961 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-config\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.014997 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.015080 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-swift-storage-0\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.015881 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.015966 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-swift-storage-0\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.018292 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.018884 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-config\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.019435 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-svc\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.029563 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c449d5454-gtqfd"] Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.055153 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbx7r\" (UniqueName: \"kubernetes.io/projected/800026eb-fe3e-4b45-b809-3ec63f7143c9-kube-api-access-xbx7r\") pod \"dnsmasq-dns-6fdc97999-qqf4w\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.117563 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-config\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc 
kubenswrapper[4932]: I0321 09:19:14.117734 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-ovndb-tls-certs\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.117765 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-httpd-config\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.117794 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twqgh\" (UniqueName: \"kubernetes.io/projected/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-kube-api-access-twqgh\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.117818 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-combined-ca-bundle\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.209497 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.219756 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-config\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.220102 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-ovndb-tls-certs\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.220202 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-httpd-config\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.220309 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twqgh\" (UniqueName: \"kubernetes.io/projected/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-kube-api-access-twqgh\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.220445 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-combined-ca-bundle\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: 
I0321 09:19:14.230950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-config\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.232017 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-combined-ca-bundle\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.233996 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-ovndb-tls-certs\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.235972 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-httpd-config\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.257061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twqgh\" (UniqueName: \"kubernetes.io/projected/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-kube-api-access-twqgh\") pod \"neutron-c449d5454-gtqfd\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.350114 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:14 crc kubenswrapper[4932]: I0321 09:19:14.989211 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fdc97999-qqf4w"] Mar 21 09:19:15 crc kubenswrapper[4932]: I0321 09:19:15.331299 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c449d5454-gtqfd"] Mar 21 09:19:15 crc kubenswrapper[4932]: I0321 09:19:15.894724 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c449d5454-gtqfd" event={"ID":"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0","Type":"ContainerStarted","Data":"469b9b376b2ac8f658d73cfb4052b30fc41ddcfdd620a59f423d1d1a3ee68acb"} Mar 21 09:19:15 crc kubenswrapper[4932]: I0321 09:19:15.895247 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c449d5454-gtqfd" event={"ID":"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0","Type":"ContainerStarted","Data":"88e7a98395c36279da40b56c99d75035718a74445c083fb009d101cb26c6185e"} Mar 21 09:19:15 crc kubenswrapper[4932]: I0321 09:19:15.899361 4932 generic.go:334] "Generic (PLEG): container finished" podID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerID="bb4b2734bd8629ef3d087f04c25b570e686de588a1b3dee6ed36352cfa292f9e" exitCode=0 Mar 21 09:19:15 crc kubenswrapper[4932]: I0321 09:19:15.899413 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" event={"ID":"800026eb-fe3e-4b45-b809-3ec63f7143c9","Type":"ContainerDied","Data":"bb4b2734bd8629ef3d087f04c25b570e686de588a1b3dee6ed36352cfa292f9e"} Mar 21 09:19:15 crc kubenswrapper[4932]: I0321 09:19:15.899447 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" event={"ID":"800026eb-fe3e-4b45-b809-3ec63f7143c9","Type":"ContainerStarted","Data":"9ff4da7264561b4cc747d82cb024b51756738f1362f92b2c1249892d40fca366"} Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.016327 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.038278 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.038394 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.639134 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86b8b664df-8nqhr"] Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.641568 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.655957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.656112 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86b8b664df-8nqhr"] Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.656201 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725273 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-combined-ca-bundle\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-internal-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " 
pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725414 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2849\" (UniqueName: \"kubernetes.io/projected/76427553-0e6c-4a84-820e-34fcfe6732a4-kube-api-access-l2849\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725441 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-ovndb-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725471 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-public-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-config\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.725686 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-httpd-config\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " 
pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.827116 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-config\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.827179 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-httpd-config\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.827234 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-combined-ca-bundle\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.827322 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-internal-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.827371 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2849\" (UniqueName: \"kubernetes.io/projected/76427553-0e6c-4a84-820e-34fcfe6732a4-kube-api-access-l2849\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 
09:19:16.827410 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-ovndb-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.827441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-public-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.833518 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-httpd-config\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.834205 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-public-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.850056 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-internal-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.854072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2849\" 
(UniqueName: \"kubernetes.io/projected/76427553-0e6c-4a84-820e-34fcfe6732a4-kube-api-access-l2849\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.855479 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-config\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.858067 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-combined-ca-bundle\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.861961 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-ovndb-tls-certs\") pod \"neutron-86b8b664df-8nqhr\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.928687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" event={"ID":"800026eb-fe3e-4b45-b809-3ec63f7143c9","Type":"ContainerStarted","Data":"3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834"} Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.930206 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.947456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c449d5454-gtqfd" 
event={"ID":"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0","Type":"ContainerStarted","Data":"46e1512eedc05e0dd2a4976c9a3689a30cf02368d23f5a4c0456d083c2aec769"} Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.948131 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:16 crc kubenswrapper[4932]: I0321 09:19:16.981278 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.003144 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" podStartSLOduration=4.003121267 podStartE2EDuration="4.003121267s" podCreationTimestamp="2026-03-21 09:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:16.973119663 +0000 UTC m=+1260.568317932" watchObservedRunningTime="2026-03-21 09:19:17.003121267 +0000 UTC m=+1260.598319546" Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.007749 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c449d5454-gtqfd" podStartSLOduration=4.00772991 podStartE2EDuration="4.00772991s" podCreationTimestamp="2026-03-21 09:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:16.999029292 +0000 UTC m=+1260.594227571" watchObservedRunningTime="2026-03-21 09:19:17.00772991 +0000 UTC m=+1260.602928189" Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.740531 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.741147 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 
09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.874423 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86b8b664df-8nqhr"] Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.948004 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.949241 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.969548 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b8b664df-8nqhr" event={"ID":"76427553-0e6c-4a84-820e-34fcfe6732a4","Type":"ContainerStarted","Data":"ee669e59dfbc017f81e9e47583ea253261911edf9cca3572a58609e18ce9a8c8"} Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.982291 4932 generic.go:334] "Generic (PLEG): container finished" podID="bba11bdb-a9d2-414d-b2df-3eaedd97df7e" containerID="8254a2d4464dd1924e3f06796cf51cdc9d33af308e51c06413170350cb08e4b8" exitCode=0 Mar 21 09:19:17 crc kubenswrapper[4932]: I0321 09:19:17.983560 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmdkb" event={"ID":"bba11bdb-a9d2-414d-b2df-3eaedd97df7e","Type":"ContainerDied","Data":"8254a2d4464dd1924e3f06796cf51cdc9d33af308e51c06413170350cb08e4b8"} Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.829285 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.864602 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.899748 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.899829 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.915632 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.915702 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.967919 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.977071 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 21 09:19:18 crc kubenswrapper[4932]: I0321 09:19:18.978442 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.005757 4932 generic.go:334] "Generic (PLEG): container finished" podID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" containerID="5ed4c0bd6c557eaab65da1a605508b866d54185082a69d6081fd69916ca9d07b" exitCode=0 Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.005833 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2dqh" event={"ID":"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b","Type":"ContainerDied","Data":"5ed4c0bd6c557eaab65da1a605508b866d54185082a69d6081fd69916ca9d07b"} Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.009843 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b8b664df-8nqhr" 
event={"ID":"76427553-0e6c-4a84-820e-34fcfe6732a4","Type":"ContainerStarted","Data":"fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926"} Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.009880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b8b664df-8nqhr" event={"ID":"76427553-0e6c-4a84-820e-34fcfe6732a4","Type":"ContainerStarted","Data":"13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4"} Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.009894 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.011195 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.011692 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.011730 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.046467 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.165741 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86b8b664df-8nqhr" podStartSLOduration=3.16571976 podStartE2EDuration="3.16571976s" podCreationTimestamp="2026-03-21 09:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:19.117840195 +0000 UTC m=+1262.713038464" watchObservedRunningTime="2026-03-21 09:19:19.16571976 +0000 UTC m=+1262.760918029" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.414570 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.600102 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.643712 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-combined-ca-bundle\") pod \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.643885 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-credential-keys\") pod \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.643928 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-scripts\") pod \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.643994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-config-data\") pod \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.644069 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-fernet-keys\") pod \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\" (UID: 
\"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.644127 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c2p4\" (UniqueName: \"kubernetes.io/projected/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-kube-api-access-6c2p4\") pod \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\" (UID: \"bba11bdb-a9d2-414d-b2df-3eaedd97df7e\") " Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.658309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-kube-api-access-6c2p4" (OuterVolumeSpecName: "kube-api-access-6c2p4") pod "bba11bdb-a9d2-414d-b2df-3eaedd97df7e" (UID: "bba11bdb-a9d2-414d-b2df-3eaedd97df7e"). InnerVolumeSpecName "kube-api-access-6c2p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.659450 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bba11bdb-a9d2-414d-b2df-3eaedd97df7e" (UID: "bba11bdb-a9d2-414d-b2df-3eaedd97df7e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.663462 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bba11bdb-a9d2-414d-b2df-3eaedd97df7e" (UID: "bba11bdb-a9d2-414d-b2df-3eaedd97df7e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.683550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-scripts" (OuterVolumeSpecName: "scripts") pod "bba11bdb-a9d2-414d-b2df-3eaedd97df7e" (UID: "bba11bdb-a9d2-414d-b2df-3eaedd97df7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.751071 4932 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.751116 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.751129 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.751161 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c2p4\" (UniqueName: \"kubernetes.io/projected/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-kube-api-access-6c2p4\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.757749 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bba11bdb-a9d2-414d-b2df-3eaedd97df7e" (UID: "bba11bdb-a9d2-414d-b2df-3eaedd97df7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.764711 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-config-data" (OuterVolumeSpecName: "config-data") pod "bba11bdb-a9d2-414d-b2df-3eaedd97df7e" (UID: "bba11bdb-a9d2-414d-b2df-3eaedd97df7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.853831 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:19 crc kubenswrapper[4932]: I0321 09:19:19.853865 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba11bdb-a9d2-414d-b2df-3eaedd97df7e-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.028788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmdkb" event={"ID":"bba11bdb-a9d2-414d-b2df-3eaedd97df7e","Type":"ContainerDied","Data":"42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b"} Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.028831 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.028896 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cmdkb" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.037932 4932 generic.go:334] "Generic (PLEG): container finished" podID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" containerID="650ed92cd749fa114c156d9e7f75efe86217762afc53b9a2718a553a39d2ab9b" exitCode=0 Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.038029 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kcpjh" event={"ID":"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a","Type":"ContainerDied","Data":"650ed92cd749fa114c156d9e7f75efe86217762afc53b9a2718a553a39d2ab9b"} Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.061678 4932 generic.go:334] "Generic (PLEG): container finished" podID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerID="b4cbb14cc83169e2ef7f60038b8fe8862711567a470ea01d74466ea9ed03954d" exitCode=1 Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.061888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerDied","Data":"b4cbb14cc83169e2ef7f60038b8fe8862711567a470ea01d74466ea9ed03954d"} Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.062742 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.063103 4932 scope.go:117] "RemoveContainer" containerID="b4cbb14cc83169e2ef7f60038b8fe8862711567a470ea01d74466ea9ed03954d" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.175574 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-658f888668-v6842"] Mar 21 09:19:20 crc kubenswrapper[4932]: E0321 09:19:20.176582 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba11bdb-a9d2-414d-b2df-3eaedd97df7e" containerName="keystone-bootstrap" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.176623 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bba11bdb-a9d2-414d-b2df-3eaedd97df7e" containerName="keystone-bootstrap" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.176962 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba11bdb-a9d2-414d-b2df-3eaedd97df7e" containerName="keystone-bootstrap" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.178107 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.180704 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.184792 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.184992 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fkdzk" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.185159 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.185324 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.190713 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.217430 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-658f888668-v6842"] Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-config-data\") pod \"keystone-658f888668-v6842\" (UID: 
\"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375519 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-credential-keys\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375559 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-scripts\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375577 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-internal-tls-certs\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375600 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-combined-ca-bundle\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375656 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762t4\" (UniqueName: \"kubernetes.io/projected/50adb689-8024-4fac-a9d0-8133a18de438-kube-api-access-762t4\") pod 
\"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375692 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-fernet-keys\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.375716 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-public-tls-certs\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.476975 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-config-data\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477033 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-credential-keys\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477074 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-scripts\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " 
pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477093 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-internal-tls-certs\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477116 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-combined-ca-bundle\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477172 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762t4\" (UniqueName: \"kubernetes.io/projected/50adb689-8024-4fac-a9d0-8133a18de438-kube-api-access-762t4\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477208 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-fernet-keys\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.477233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-public-tls-certs\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc 
kubenswrapper[4932]: I0321 09:19:20.485916 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-public-tls-certs\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.487085 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-config-data\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.489183 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-combined-ca-bundle\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.491825 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-internal-tls-certs\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.492113 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-scripts\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.494920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-fernet-keys\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.496569 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/50adb689-8024-4fac-a9d0-8133a18de438-credential-keys\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.517996 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762t4\" (UniqueName: \"kubernetes.io/projected/50adb689-8024-4fac-a9d0-8133a18de438-kube-api-access-762t4\") pod \"keystone-658f888668-v6842\" (UID: \"50adb689-8024-4fac-a9d0-8133a18de438\") " pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.530798 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.859250 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.859625 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.859640 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:20 crc kubenswrapper[4932]: I0321 09:19:20.859654 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.015271 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.038850 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.077505 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.077527 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.077610 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.080610 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.093689 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.166383 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.322891 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 21 09:19:21 crc kubenswrapper[4932]: I0321 09:19:21.328990 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 21 09:19:22 crc kubenswrapper[4932]: I0321 09:19:22.096315 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.115592 4932 generic.go:334] "Generic (PLEG): container finished" podID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerID="6227dbe5e60e32f4f6a2f54f08549059b5b6bbce749531e48d9e7d352ee47a40" exitCode=1 Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.116320 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bfc8fff89-rv95c" event={"ID":"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a","Type":"ContainerDied","Data":"6227dbe5e60e32f4f6a2f54f08549059b5b6bbce749531e48d9e7d352ee47a40"} Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.347421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.347515 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.350695 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.373079 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Mar 21 09:19:23 crc kubenswrapper[4932]: I0321 09:19:23.373150 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.136048 4932 generic.go:334] "Generic (PLEG): container finished" podID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerID="ca3a0ca9da247fcaa5a0b427775d9990ca1a076bc2e692a300a516361e17118e" exitCode=1 Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.136131 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc94d857-qzbl4" event={"ID":"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da","Type":"ContainerDied","Data":"ca3a0ca9da247fcaa5a0b427775d9990ca1a076bc2e692a300a516361e17118e"} Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.140258 4932 generic.go:334] "Generic (PLEG): container finished" podID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerID="796c29fb749f6c917d0af3192300a74bb9e33e2a8c17b2fa1475434d5224ae0d" exitCode=1 Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.140410 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c74d85757-kjlwf" event={"ID":"e8adb280-44b6-4fb9-b358-aa75af003a44","Type":"ContainerDied","Data":"796c29fb749f6c917d0af3192300a74bb9e33e2a8c17b2fa1475434d5224ae0d"} Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.142604 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="76843e78ee1aed96ecca14b093534f9b5aad60e2922cc9c5d20277979884fa14" exitCode=1 Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.142692 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"76843e78ee1aed96ecca14b093534f9b5aad60e2922cc9c5d20277979884fa14"} Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.143608 4932 
scope.go:117] "RemoveContainer" containerID="76843e78ee1aed96ecca14b093534f9b5aad60e2922cc9c5d20277979884fa14" Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.211665 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.280293 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cbfddf68f-8mwxb"] Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.280697 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="dnsmasq-dns" containerID="cri-o://6490c90c4c95cc0baaa183d6485faed9922b5af04aede6033b4a83168b162411" gracePeriod=10 Mar 21 09:19:24 crc kubenswrapper[4932]: I0321 09:19:24.583328 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: connect: connection refused" Mar 21 09:19:25 crc kubenswrapper[4932]: I0321 09:19:25.170804 4932 generic.go:334] "Generic (PLEG): container finished" podID="829e124a-b451-4e8c-9d46-594c51a71418" containerID="6490c90c4c95cc0baaa183d6485faed9922b5af04aede6033b4a83168b162411" exitCode=0 Mar 21 09:19:25 crc kubenswrapper[4932]: I0321 09:19:25.170859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" event={"ID":"829e124a-b451-4e8c-9d46-594c51a71418","Type":"ContainerDied","Data":"6490c90c4c95cc0baaa183d6485faed9922b5af04aede6033b4a83168b162411"} Mar 21 09:19:25 crc kubenswrapper[4932]: I0321 09:19:25.635732 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:19:25 crc kubenswrapper[4932]: I0321 09:19:25.636273 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-api-0" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api-log" containerID="cri-o://3cf8ebc6cab85fca5cda4ba26728bd0232e9fd48ac7e7429c5b981128d9081ab" gracePeriod=30 Mar 21 09:19:25 crc kubenswrapper[4932]: I0321 09:19:25.636323 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api" containerID="cri-o://34e7bd308a5a9842a6b0f827f60a61d65285ea20d016fe5e3845b49a56b6b00a" gracePeriod=30 Mar 21 09:19:26 crc kubenswrapper[4932]: I0321 09:19:26.185681 4932 generic.go:334] "Generic (PLEG): container finished" podID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerID="3cf8ebc6cab85fca5cda4ba26728bd0232e9fd48ac7e7429c5b981128d9081ab" exitCode=143 Mar 21 09:19:26 crc kubenswrapper[4932]: I0321 09:19:26.185724 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19a74ebb-1fc7-4e31-82dc-cb839cceeffb","Type":"ContainerDied","Data":"3cf8ebc6cab85fca5cda4ba26728bd0232e9fd48ac7e7429c5b981128d9081ab"} Mar 21 09:19:26 crc kubenswrapper[4932]: I0321 09:19:26.870360 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:45216->10.217.0.169:9322: read: connection reset by peer" Mar 21 09:19:26 crc kubenswrapper[4932]: I0321 09:19:26.870380 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:45228->10.217.0.169:9322: read: connection reset by peer" Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.198266 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="e34569c65ef63ac1abf4d70b584148f3284f2ae7d7ba5186f85a28c1f2539f80" exitCode=1 Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.198343 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"e34569c65ef63ac1abf4d70b584148f3284f2ae7d7ba5186f85a28c1f2539f80"} Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.199077 4932 scope.go:117] "RemoveContainer" containerID="e34569c65ef63ac1abf4d70b584148f3284f2ae7d7ba5186f85a28c1f2539f80" Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.202479 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kcpjh" event={"ID":"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a","Type":"ContainerDied","Data":"d0172cca35ebf6643837d90d464b27054ed06f389fb143cff0b54d0ce13b9afc"} Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.202516 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0172cca35ebf6643837d90d464b27054ed06f389fb143cff0b54d0ce13b9afc" Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.204872 4932 generic.go:334] "Generic (PLEG): container finished" podID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerID="34e7bd308a5a9842a6b0f827f60a61d65285ea20d016fe5e3845b49a56b6b00a" exitCode=0 Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.204928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19a74ebb-1fc7-4e31-82dc-cb839cceeffb","Type":"ContainerDied","Data":"34e7bd308a5a9842a6b0f827f60a61d65285ea20d016fe5e3845b49a56b6b00a"} Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.207567 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-h2dqh" 
event={"ID":"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b","Type":"ContainerDied","Data":"ca4d2e591fc8561d6f6ea646c5fa65cb873a9122e2725f8a05612ee8daf95b8b"} Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.207609 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca4d2e591fc8561d6f6ea646c5fa65cb873a9122e2725f8a05612ee8daf95b8b" Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.311060 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kcpjh" Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.330270 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h2dqh" Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.345889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5hpm\" (UniqueName: \"kubernetes.io/projected/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-kube-api-access-z5hpm\") pod \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.346089 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-db-sync-config-data\") pod \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.346141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-combined-ca-bundle\") pod \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\" (UID: \"512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a\") " Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.361537 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" (UID: "512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.366153 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-kube-api-access-z5hpm" (OuterVolumeSpecName: "kube-api-access-z5hpm") pod "512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" (UID: "512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a"). InnerVolumeSpecName "kube-api-access-z5hpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.416324 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" (UID: "512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450008 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-combined-ca-bundle\") pod \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450224 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-scripts\") pod \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450243 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-config-data\") pod \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npzk6\" (UniqueName: \"kubernetes.io/projected/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-kube-api-access-npzk6\") pod \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450320 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-logs\") pod \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\" (UID: \"d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450699 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450709 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.450718 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5hpm\" (UniqueName: \"kubernetes.io/projected/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a-kube-api-access-z5hpm\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.451317 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-logs" (OuterVolumeSpecName: "logs") pod "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" (UID: "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.465639 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-kube-api-access-npzk6" (OuterVolumeSpecName: "kube-api-access-npzk6") pod "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" (UID: "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b"). InnerVolumeSpecName "kube-api-access-npzk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.472589 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-scripts" (OuterVolumeSpecName: "scripts") pod "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" (UID: "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.519384 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" (UID: "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.520672 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-config-data" (OuterVolumeSpecName: "config-data") pod "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" (UID: "d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.552448 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.552492 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.552502 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npzk6\" (UniqueName: \"kubernetes.io/projected/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-kube-api-access-npzk6\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.552512 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.552521 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.588161 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb"
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.667684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-svc\") pod \"829e124a-b451-4e8c-9d46-594c51a71418\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.667731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-nb\") pod \"829e124a-b451-4e8c-9d46-594c51a71418\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.667949 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8xf\" (UniqueName: \"kubernetes.io/projected/829e124a-b451-4e8c-9d46-594c51a71418-kube-api-access-ps8xf\") pod \"829e124a-b451-4e8c-9d46-594c51a71418\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.667998 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-sb\") pod \"829e124a-b451-4e8c-9d46-594c51a71418\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.668033 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-config\") pod \"829e124a-b451-4e8c-9d46-594c51a71418\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.668064 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-swift-storage-0\") pod \"829e124a-b451-4e8c-9d46-594c51a71418\" (UID: \"829e124a-b451-4e8c-9d46-594c51a71418\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.678721 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829e124a-b451-4e8c-9d46-594c51a71418-kube-api-access-ps8xf" (OuterVolumeSpecName: "kube-api-access-ps8xf") pod "829e124a-b451-4e8c-9d46-594c51a71418" (UID: "829e124a-b451-4e8c-9d46-594c51a71418"). InnerVolumeSpecName "kube-api-access-ps8xf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.741255 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.741295 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.754998 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.770720 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8xf\" (UniqueName: \"kubernetes.io/projected/829e124a-b451-4e8c-9d46-594c51a71418-kube-api-access-ps8xf\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.867153 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "829e124a-b451-4e8c-9d46-594c51a71418" (UID: "829e124a-b451-4e8c-9d46-594c51a71418"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.871866 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-custom-prometheus-ca\") pod \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.872042 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-logs\") pod \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.872161 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-config-data\") pod \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.872310 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-combined-ca-bundle\") pod \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.872500 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz5zf\" (UniqueName: \"kubernetes.io/projected/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-kube-api-access-cz5zf\") pod \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\" (UID: \"19a74ebb-1fc7-4e31-82dc-cb839cceeffb\") "
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.872943 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.874047 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-logs" (OuterVolumeSpecName: "logs") pod "19a74ebb-1fc7-4e31-82dc-cb839cceeffb" (UID: "19a74ebb-1fc7-4e31-82dc-cb839cceeffb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.895833 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-kube-api-access-cz5zf" (OuterVolumeSpecName: "kube-api-access-cz5zf") pod "19a74ebb-1fc7-4e31-82dc-cb839cceeffb" (UID: "19a74ebb-1fc7-4e31-82dc-cb839cceeffb"). InnerVolumeSpecName "kube-api-access-cz5zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.946055 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "829e124a-b451-4e8c-9d46-594c51a71418" (UID: "829e124a-b451-4e8c-9d46-594c51a71418"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.947639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.947738 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.955941 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-config" (OuterVolumeSpecName: "config") pod "829e124a-b451-4e8c-9d46-594c51a71418" (UID: "829e124a-b451-4e8c-9d46-594c51a71418"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.971542 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19a74ebb-1fc7-4e31-82dc-cb839cceeffb" (UID: "19a74ebb-1fc7-4e31-82dc-cb839cceeffb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.977306 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-658f888668-v6842"]
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.977737 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.977774 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.977787 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz5zf\" (UniqueName: \"kubernetes.io/projected/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-kube-api-access-cz5zf\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.977798 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.977812 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-config\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.988405 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "19a74ebb-1fc7-4e31-82dc-cb839cceeffb" (UID: "19a74ebb-1fc7-4e31-82dc-cb839cceeffb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.991943 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "829e124a-b451-4e8c-9d46-594c51a71418" (UID: "829e124a-b451-4e8c-9d46-594c51a71418"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:27 crc kubenswrapper[4932]: I0321 09:19:27.995063 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "829e124a-b451-4e8c-9d46-594c51a71418" (UID: "829e124a-b451-4e8c-9d46-594c51a71418"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.013080 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-config-data" (OuterVolumeSpecName: "config-data") pod "19a74ebb-1fc7-4e31-82dc-cb839cceeffb" (UID: "19a74ebb-1fc7-4e31-82dc-cb839cceeffb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.083707 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.083754 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.083767 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19a74ebb-1fc7-4e31-82dc-cb839cceeffb-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.083781 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/829e124a-b451-4e8c-9d46-594c51a71418-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.218036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.220522 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-658f888668-v6842" event={"ID":"50adb689-8024-4fac-a9d0-8133a18de438","Type":"ContainerStarted","Data":"9874ecba8e3ee72d9ed8db534c40b96112df8fcd185cb25ac795adae1a3473cb"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.225899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.230952 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerStarted","Data":"827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.241498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerStarted","Data":"c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.243492 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"19a74ebb-1fc7-4e31-82dc-cb839cceeffb","Type":"ContainerDied","Data":"5aa1ddc866fe0a5464d3e3665373245af140c05e6b683795ac016055b4b562f3"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.243530 4932 scope.go:117] "RemoveContainer" containerID="34e7bd308a5a9842a6b0f827f60a61d65285ea20d016fe5e3845b49a56b6b00a"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.243686 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.249478 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kcpjh"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.249482 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb" event={"ID":"829e124a-b451-4e8c-9d46-594c51a71418","Type":"ContainerDied","Data":"7fb811bc183161bdcd2298e58dab138b3c3d03aedbc810eac42bb304cc029633"}
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.249674 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-h2dqh"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.249810 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbfddf68f-8mwxb"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.288501 4932 scope.go:117] "RemoveContainer" containerID="3cf8ebc6cab85fca5cda4ba26728bd0232e9fd48ac7e7429c5b981128d9081ab"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.325018 4932 scope.go:117] "RemoveContainer" containerID="6490c90c4c95cc0baaa183d6485faed9922b5af04aede6033b4a83168b162411"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.330235 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cbfddf68f-8mwxb"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.344427 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cbfddf68f-8mwxb"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.412698 4932 scope.go:117] "RemoveContainer" containerID="b712e45353e6a068788c30c74d1af34c5718a8200cfa2c1d7c74e904c142c92d"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.432842 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.446556 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469111 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Mar 21 09:19:28 crc kubenswrapper[4932]: E0321 09:19:28.469615 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" containerName="placement-db-sync"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469629 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" containerName="placement-db-sync"
Mar 21 09:19:28 crc kubenswrapper[4932]: E0321 09:19:28.469650 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api-log"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469656 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api-log"
Mar 21 09:19:28 crc kubenswrapper[4932]: E0321 09:19:28.469668 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="dnsmasq-dns"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469673 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="dnsmasq-dns"
Mar 21 09:19:28 crc kubenswrapper[4932]: E0321 09:19:28.469688 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="init"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469694 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="init"
Mar 21 09:19:28 crc kubenswrapper[4932]: E0321 09:19:28.469706 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" containerName="barbican-db-sync"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469711 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" containerName="barbican-db-sync"
Mar 21 09:19:28 crc kubenswrapper[4932]: E0321 09:19:28.469728 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469733 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469900 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" containerName="barbican-db-sync"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469913 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469923 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" containerName="watcher-api-log"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469933 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="829e124a-b451-4e8c-9d46-594c51a71418" containerName="dnsmasq-dns"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.469945 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" containerName="placement-db-sync"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.471006 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.473328 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.475288 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.475694 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.481047 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.493595 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-public-tls-certs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.496866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de1997-45e0-4e64-b3a3-4d9e9debfb39-logs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.497101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.497246 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-config-data\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.497487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.497818 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.498180 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rljn\" (UniqueName: \"kubernetes.io/projected/45de1997-45e0-4e64-b3a3-4d9e9debfb39-kube-api-access-4rljn\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.576628 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bc4cc655d-wmdr9"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.578698 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.587973 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.588434 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.588591 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.588766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sxwdd"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.588983 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.600768 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-internal-tls-certs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.600821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-combined-ca-bundle\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.600894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-config-data\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.600930 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d312ece-9744-4dc2-be9d-1220beb02bb1-logs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.600965 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.600992 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8p9b\" (UniqueName: \"kubernetes.io/projected/8d312ece-9744-4dc2-be9d-1220beb02bb1-kube-api-access-k8p9b\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rljn\" (UniqueName: \"kubernetes.io/projected/45de1997-45e0-4e64-b3a3-4d9e9debfb39-kube-api-access-4rljn\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601130 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-public-tls-certs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601162 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de1997-45e0-4e64-b3a3-4d9e9debfb39-logs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601183 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601213 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-scripts\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601242 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-config-data\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-public-tls-certs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.601333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.605929 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de1997-45e0-4e64-b3a3-4d9e9debfb39-logs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.618264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.618336 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bc4cc655d-wmdr9"]
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.619658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.639536 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-config-data\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0"
Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.643460 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-public-tls-certs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.643648 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rljn\" (UniqueName: \"kubernetes.io/projected/45de1997-45e0-4e64-b3a3-4d9e9debfb39-kube-api-access-4rljn\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.643791 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " pod="openstack/watcher-api-0" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.644565 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-779f655bb5-55qpq"] Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.646274 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.657720 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8k82n" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.658103 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.658278 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.665126 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-779f655bb5-55qpq"] Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.710830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-config-data\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.710890 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d312ece-9744-4dc2-be9d-1220beb02bb1-logs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.710929 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8p9b\" (UniqueName: \"kubernetes.io/projected/8d312ece-9744-4dc2-be9d-1220beb02bb1-kube-api-access-k8p9b\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711032 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-config-data-custom\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711061 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-combined-ca-bundle\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711087 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-config-data\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711134 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-scripts\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711169 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-public-tls-certs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711207 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94k7\" (UniqueName: \"kubernetes.io/projected/9c85be66-46fe-4830-b918-25743e5a86d2-kube-api-access-b94k7\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711257 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-internal-tls-certs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711282 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-combined-ca-bundle\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.711314 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c85be66-46fe-4830-b918-25743e5a86d2-logs\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.712389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d312ece-9744-4dc2-be9d-1220beb02bb1-logs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.721369 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-public-tls-certs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.725407 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-combined-ca-bundle\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.728594 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-scripts\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.729112 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-internal-tls-certs\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.746149 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d312ece-9744-4dc2-be9d-1220beb02bb1-config-data\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.749591 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b4f895846-xgmln"] Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 
09:19:28.751776 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8p9b\" (UniqueName: \"kubernetes.io/projected/8d312ece-9744-4dc2-be9d-1220beb02bb1-kube-api-access-k8p9b\") pod \"placement-5bc4cc655d-wmdr9\" (UID: \"8d312ece-9744-4dc2-be9d-1220beb02bb1\") " pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.754035 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.760708 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.811172 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98588b9bc-lswzd"] Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.813133 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.826836 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-config-data-custom\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.826919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-logs\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.826935 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hrvn\" (UniqueName: \"kubernetes.io/projected/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-kube-api-access-2hrvn\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.826996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-config-data-custom\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-combined-ca-bundle\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827031 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-config-data\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827058 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-combined-ca-bundle\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " 
pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94k7\" (UniqueName: \"kubernetes.io/projected/9c85be66-46fe-4830-b918-25743e5a86d2-kube-api-access-b94k7\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-config-data\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827198 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c85be66-46fe-4830-b918-25743e5a86d2-logs\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.827626 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c85be66-46fe-4830-b918-25743e5a86d2-logs\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.833410 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-combined-ca-bundle\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " 
pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.834222 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-config-data\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.841581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c85be66-46fe-4830-b918-25743e5a86d2-config-data-custom\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.855687 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.871993 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94k7\" (UniqueName: \"kubernetes.io/projected/9c85be66-46fe-4830-b918-25743e5a86d2-kube-api-access-b94k7\") pod \"barbican-worker-779f655bb5-55qpq\" (UID: \"9c85be66-46fe-4830-b918-25743e5a86d2\") " pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.903758 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b4f895846-xgmln"] Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928589 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-sb\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc 
kubenswrapper[4932]: I0321 09:19:28.928649 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-combined-ca-bundle\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928705 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrnfw\" (UniqueName: \"kubernetes.io/projected/4181731c-7e65-491c-8bf2-7e7042ad14e3-kube-api-access-vrnfw\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928732 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-config-data\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928754 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-swift-storage-0\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-svc\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " 
pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928801 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-nb\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928850 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-config\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-config-data-custom\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928952 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-logs\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.928976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hrvn\" (UniqueName: \"kubernetes.io/projected/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-kube-api-access-2hrvn\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: 
\"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.939082 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-logs\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.941521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-config-data\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.944209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-config-data-custom\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.948151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-combined-ca-bundle\") pod \"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.957507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hrvn\" (UniqueName: \"kubernetes.io/projected/6ac55ca5-9ef6-4157-a91e-49d312d5b2b8-kube-api-access-2hrvn\") pod 
\"barbican-keystone-listener-6b4f895846-xgmln\" (UID: \"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8\") " pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:28 crc kubenswrapper[4932]: I0321 09:19:28.961680 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98588b9bc-lswzd"] Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.034079 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.037479 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f975cdb74-g8tw2"] Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.040097 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.040919 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrnfw\" (UniqueName: \"kubernetes.io/projected/4181731c-7e65-491c-8bf2-7e7042ad14e3-kube-api-access-vrnfw\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.040963 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-swift-storage-0\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.040999 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-svc\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " 
pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.041037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-nb\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.041131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-config\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.041437 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-sb\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.042381 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-swift-storage-0\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.042573 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-sb\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: 
I0321 09:19:29.042878 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-svc\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.043135 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-nb\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.043498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-config\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.049013 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.056730 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f975cdb74-g8tw2"] Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.063312 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-779f655bb5-55qpq" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.083117 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrnfw\" (UniqueName: \"kubernetes.io/projected/4181731c-7e65-491c-8bf2-7e7042ad14e3-kube-api-access-vrnfw\") pod \"dnsmasq-dns-98588b9bc-lswzd\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") " pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.102633 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.144532 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-logs\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.144602 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twtch\" (UniqueName: \"kubernetes.io/projected/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-kube-api-access-twtch\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.144632 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data-custom\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.144652 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-combined-ca-bundle\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.144712 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.164217 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.247136 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-combined-ca-bundle\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.247230 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.247307 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-logs\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: 
\"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.247369 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twtch\" (UniqueName: \"kubernetes.io/projected/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-kube-api-access-twtch\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.247397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data-custom\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.254982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-logs\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.256609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.274000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data-custom\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " 
pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.280118 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-combined-ca-bundle\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.287127 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twtch\" (UniqueName: \"kubernetes.io/projected/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-kube-api-access-twtch\") pod \"barbican-api-6f975cdb74-g8tw2\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.394460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-658f888668-v6842" event={"ID":"50adb689-8024-4fac-a9d0-8133a18de438","Type":"ContainerStarted","Data":"4ffb22855c5ce2021b24e206968b824b585b860e5bfd16e41b997d59be33edba"} Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.396438 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.411407 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz4hd" event={"ID":"37076824-e8b6-4b75-aea4-f463d7e50613","Type":"ContainerStarted","Data":"a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233"} Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.461047 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-658f888668-v6842" podStartSLOduration=9.461030297 podStartE2EDuration="9.461030297s" podCreationTimestamp="2026-03-21 09:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:29.424857973 +0000 UTC m=+1273.020056242" watchObservedRunningTime="2026-03-21 09:19:29.461030297 +0000 UTC m=+1273.056228566" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.526147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.558436 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vz4hd" podStartSLOduration=4.295180746 podStartE2EDuration="51.558413569s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:40.163830556 +0000 UTC m=+1223.759028825" lastFinishedPulling="2026-03-21 09:19:27.427063379 +0000 UTC m=+1271.022261648" observedRunningTime="2026-03-21 09:19:29.454795546 +0000 UTC m=+1273.049993815" watchObservedRunningTime="2026-03-21 09:19:29.558413569 +0000 UTC m=+1273.153611838" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.621150 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.738280 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a74ebb-1fc7-4e31-82dc-cb839cceeffb" path="/var/lib/kubelet/pods/19a74ebb-1fc7-4e31-82dc-cb839cceeffb/volumes" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.740917 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829e124a-b451-4e8c-9d46-594c51a71418" path="/var/lib/kubelet/pods/829e124a-b451-4e8c-9d46-594c51a71418/volumes" Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.784274 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bc4cc655d-wmdr9"] Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.974648 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-779f655bb5-55qpq"] 
Mar 21 09:19:29 crc kubenswrapper[4932]: I0321 09:19:29.988403 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b4f895846-xgmln"] Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.225339 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.225573 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.225612 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.226339 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.226402 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885" gracePeriod=600 Mar 21 09:19:30 crc 
kubenswrapper[4932]: I0321 09:19:30.427012 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" event={"ID":"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8","Type":"ContainerStarted","Data":"a837351577348b2a65f1b4fe8f328d63b54aa040a1660274b381cee298503c9f"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.430636 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779f655bb5-55qpq" event={"ID":"9c85be66-46fe-4830-b918-25743e5a86d2","Type":"ContainerStarted","Data":"0ab75a4109d4d49129db7d6ecfa1b5d6247ee6b5d96853bd01a82c7a779e300d"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.433515 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98588b9bc-lswzd"] Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.438181 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc4cc655d-wmdr9" event={"ID":"8d312ece-9744-4dc2-be9d-1220beb02bb1","Type":"ContainerStarted","Data":"6ac1f53b9a16e0e03ae578ee2aca1ad42d8a9b9f07b64816197650fb66720fd6"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.446278 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885" exitCode=0 Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.446638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.446805 4932 scope.go:117] "RemoveContainer" containerID="91e3049bee861f37df7d289eee8d3ed5ac012bf0c17d73d1859a4aa9a278e9f2" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.452865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"45de1997-45e0-4e64-b3a3-4d9e9debfb39","Type":"ContainerStarted","Data":"36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.452914 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.452926 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"45de1997-45e0-4e64-b3a3-4d9e9debfb39","Type":"ContainerStarted","Data":"e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.452938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"45de1997-45e0-4e64-b3a3-4d9e9debfb39","Type":"ContainerStarted","Data":"69f6ae80444a25f1501d5ff5f267cd63122441fe285b6b3920fec9a625cecbe0"} Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.462917 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f975cdb74-g8tw2"] Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.473079 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.177:9322/\": dial tcp 10.217.0.177:9322: connect: connection refused" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.480729 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.480708749 podStartE2EDuration="2.480708749s" podCreationTimestamp="2026-03-21 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:30.477151881 +0000 UTC m=+1274.072350160" watchObservedRunningTime="2026-03-21 09:19:30.480708749 +0000 UTC 
m=+1274.075907018" Mar 21 09:19:30 crc kubenswrapper[4932]: I0321 09:19:30.859964 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.014040 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.511241 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"2d3170911019ffc2d29f28c120dc321f5dd9686c4e0cf0c0353759cae75828b5"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.513635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f975cdb74-g8tw2" event={"ID":"70233835-5b9c-4b42-a3e1-07ccbccfeaf3","Type":"ContainerStarted","Data":"7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.513726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f975cdb74-g8tw2" event={"ID":"70233835-5b9c-4b42-a3e1-07ccbccfeaf3","Type":"ContainerStarted","Data":"f13a2ed6c0fb2ee7e4f1357e9ef564fbab0fba150a4fee7f80ddb2a799864b56"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.519055 4932 generic.go:334] "Generic (PLEG): container finished" podID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerID="3e0ea409fd465250c2a767fbf2c1c6fb405342320c958b7b3dcee33a55c9dfba" exitCode=0 Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.519157 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" event={"ID":"4181731c-7e65-491c-8bf2-7e7042ad14e3","Type":"ContainerDied","Data":"3e0ea409fd465250c2a767fbf2c1c6fb405342320c958b7b3dcee33a55c9dfba"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.519201 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" event={"ID":"4181731c-7e65-491c-8bf2-7e7042ad14e3","Type":"ContainerStarted","Data":"35518d0281384e4ce70548c033aa81e4801fba9315cd74f83ce192b6baa501e0"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.524962 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc4cc655d-wmdr9" event={"ID":"8d312ece-9744-4dc2-be9d-1220beb02bb1","Type":"ContainerStarted","Data":"44a7668f2fb32e4730b6a3f04dd5bacefc163be817ba5fd3cf834e84cc565ec0"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.525041 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc4cc655d-wmdr9" event={"ID":"8d312ece-9744-4dc2-be9d-1220beb02bb1","Type":"ContainerStarted","Data":"5b7eadb4bf05c0f9d7aa9c9052d72fc46aa9b76194f79e09a09f0617a700c27f"} Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.527051 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.527110 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.527123 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:19:31 crc kubenswrapper[4932]: I0321 09:19:31.591584 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5bc4cc655d-wmdr9" podStartSLOduration=3.591561601 podStartE2EDuration="3.591561601s" podCreationTimestamp="2026-03-21 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:31.581223533 +0000 UTC m=+1275.176421802" watchObservedRunningTime="2026-03-21 09:19:31.591561601 +0000 UTC m=+1275.186759870" Mar 21 09:19:31 crc 
kubenswrapper[4932]: I0321 09:19:31.723629 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.028634 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cb57d57b8-l7z46"] Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.033718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.044937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.047896 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.051951 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb57d57b8-l7z46"] Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.069678 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-config-data\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.069737 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-combined-ca-bundle\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.069864 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zg925\" (UniqueName: \"kubernetes.io/projected/7d9b603e-30ad-4c88-995f-1d931e8fbb60-kube-api-access-zg925\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.070009 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-config-data-custom\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.070067 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d9b603e-30ad-4c88-995f-1d931e8fbb60-logs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.070110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-public-tls-certs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.070165 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-internal-tls-certs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172685 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d9b603e-30ad-4c88-995f-1d931e8fbb60-logs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172744 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-public-tls-certs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-internal-tls-certs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-config-data\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172923 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-combined-ca-bundle\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172950 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zg925\" (UniqueName: \"kubernetes.io/projected/7d9b603e-30ad-4c88-995f-1d931e8fbb60-kube-api-access-zg925\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.172991 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-config-data-custom\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.173193 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d9b603e-30ad-4c88-995f-1d931e8fbb60-logs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.179658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-combined-ca-bundle\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.179741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-config-data-custom\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.180078 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-config-data\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.183760 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-public-tls-certs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.187765 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d9b603e-30ad-4c88-995f-1d931e8fbb60-internal-tls-certs\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.194983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg925\" (UniqueName: \"kubernetes.io/projected/7d9b603e-30ad-4c88-995f-1d931e8fbb60-kube-api-access-zg925\") pod \"barbican-api-5cb57d57b8-l7z46\" (UID: \"7d9b603e-30ad-4c88-995f-1d931e8fbb60\") " pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.377884 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.593612 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f975cdb74-g8tw2" event={"ID":"70233835-5b9c-4b42-a3e1-07ccbccfeaf3","Type":"ContainerStarted","Data":"a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8"} Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.594776 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.594805 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.607478 4932 generic.go:334] "Generic (PLEG): container finished" podID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerID="827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e" exitCode=1 Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.608529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerDied","Data":"827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e"} Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.608575 4932 scope.go:117] "RemoveContainer" containerID="b4cbb14cc83169e2ef7f60038b8fe8862711567a470ea01d74466ea9ed03954d" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.608831 4932 scope.go:117] "RemoveContainer" containerID="827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e" Mar 21 09:19:33 crc kubenswrapper[4932]: E0321 09:19:33.609102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine 
pod=watcher-decision-engine-0_openstack(7d4c5dc9-4c73-486d-9427-2a5f07da9e89)\"" pod="openstack/watcher-decision-engine-0" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.618410 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f975cdb74-g8tw2" podStartSLOduration=5.618393829 podStartE2EDuration="5.618393829s" podCreationTimestamp="2026-03-21 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:33.617568195 +0000 UTC m=+1277.212766464" watchObservedRunningTime="2026-03-21 09:19:33.618393829 +0000 UTC m=+1277.213592098" Mar 21 09:19:33 crc kubenswrapper[4932]: I0321 09:19:33.857262 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.388138 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb57d57b8-l7z46"] Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.623019 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb57d57b8-l7z46" event={"ID":"7d9b603e-30ad-4c88-995f-1d931e8fbb60","Type":"ContainerStarted","Data":"277f7ea4985adb7ff95d0f558b4ad8e686118c96d532e8d6f396e22e8f67a6b4"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.623378 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb57d57b8-l7z46" event={"ID":"7d9b603e-30ad-4c88-995f-1d931e8fbb60","Type":"ContainerStarted","Data":"c8a8378d9ab7eccad9147154bafe05fdd0e09748c1e7cae41178d7161743e113"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.625529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" 
event={"ID":"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8","Type":"ContainerStarted","Data":"9ce9659f36e680c74e32e2de3d4df4e45de5ba4bb2793d3438f355a29cf0d504"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.625564 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" event={"ID":"6ac55ca5-9ef6-4157-a91e-49d312d5b2b8","Type":"ContainerStarted","Data":"6c69af44bb00a767c0f8963233bc99562ad6b189ff2a7b048d7835ac5cdc6a00"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.629065 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779f655bb5-55qpq" event={"ID":"9c85be66-46fe-4830-b918-25743e5a86d2","Type":"ContainerStarted","Data":"55ed375ec54abe01a0ebded2471cc2d99888f7080eda3c09a7decdd83a39705e"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.629106 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779f655bb5-55qpq" event={"ID":"9c85be66-46fe-4830-b918-25743e5a86d2","Type":"ContainerStarted","Data":"200bddb2bd5fea8a179c1a533b56734db9f0e402b463e15feec539f98b579547"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.637558 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" event={"ID":"4181731c-7e65-491c-8bf2-7e7042ad14e3","Type":"ContainerStarted","Data":"58c49e2d8620ae87f0dc2cdec7d3fe78c467c74bf39c7187b86fb9c6d5d68fe9"} Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.638530 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.652324 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b4f895846-xgmln" podStartSLOduration=2.850914588 podStartE2EDuration="6.65229373s" podCreationTimestamp="2026-03-21 09:19:28 +0000 UTC" firstStartedPulling="2026-03-21 09:19:30.042513647 +0000 UTC 
m=+1273.637711906" lastFinishedPulling="2026-03-21 09:19:33.843892779 +0000 UTC m=+1277.439091048" observedRunningTime="2026-03-21 09:19:34.649311638 +0000 UTC m=+1278.244509907" watchObservedRunningTime="2026-03-21 09:19:34.65229373 +0000 UTC m=+1278.247491999" Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.669367 4932 scope.go:117] "RemoveContainer" containerID="827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e" Mar 21 09:19:34 crc kubenswrapper[4932]: E0321 09:19:34.669558 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7d4c5dc9-4c73-486d-9427-2a5f07da9e89)\"" pod="openstack/watcher-decision-engine-0" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.688782 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" podStartSLOduration=6.688762594 podStartE2EDuration="6.688762594s" podCreationTimestamp="2026-03-21 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:34.687472985 +0000 UTC m=+1278.282671254" watchObservedRunningTime="2026-03-21 09:19:34.688762594 +0000 UTC m=+1278.283960863" Mar 21 09:19:34 crc kubenswrapper[4932]: I0321 09:19:34.713192 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-779f655bb5-55qpq" podStartSLOduration=2.867045525 podStartE2EDuration="6.713176416s" podCreationTimestamp="2026-03-21 09:19:28 +0000 UTC" firstStartedPulling="2026-03-21 09:19:29.997571502 +0000 UTC m=+1273.592769771" lastFinishedPulling="2026-03-21 09:19:33.843702393 +0000 UTC m=+1277.438900662" observedRunningTime="2026-03-21 09:19:34.708789561 +0000 UTC m=+1278.303987830" 
watchObservedRunningTime="2026-03-21 09:19:34.713176416 +0000 UTC m=+1278.308374685" Mar 21 09:19:35 crc kubenswrapper[4932]: I0321 09:19:35.693960 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb57d57b8-l7z46" event={"ID":"7d9b603e-30ad-4c88-995f-1d931e8fbb60","Type":"ContainerStarted","Data":"72e7cc8e4848e6e5380503177a27a2a5a72bebd762a3e252509a9c495b959684"} Mar 21 09:19:35 crc kubenswrapper[4932]: I0321 09:19:35.728552 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cb57d57b8-l7z46" podStartSLOduration=3.7285291149999997 podStartE2EDuration="3.728529115s" podCreationTimestamp="2026-03-21 09:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:35.723589432 +0000 UTC m=+1279.318787701" watchObservedRunningTime="2026-03-21 09:19:35.728529115 +0000 UTC m=+1279.323727384" Mar 21 09:19:36 crc kubenswrapper[4932]: I0321 09:19:35.999980 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 21 09:19:36 crc kubenswrapper[4932]: I0321 09:19:36.703375 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:36 crc kubenswrapper[4932]: I0321 09:19:36.703415 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.714126 4932 generic.go:334] "Generic (PLEG): container finished" podID="37076824-e8b6-4b75-aea4-f463d7e50613" containerID="a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233" exitCode=0 Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.715609 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz4hd" 
event={"ID":"37076824-e8b6-4b75-aea4-f463d7e50613","Type":"ContainerDied","Data":"a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233"} Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.741091 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.741137 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.743437 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.948076 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.949074 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:19:37 crc kubenswrapper[4932]: I0321 09:19:37.950138 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Mar 21 09:19:38 crc kubenswrapper[4932]: I0321 09:19:38.856744 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 21 09:19:38 crc kubenswrapper[4932]: I0321 09:19:38.865276 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 21 09:19:39 crc 
kubenswrapper[4932]: I0321 09:19:39.167780 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.238262 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fdc97999-qqf4w"] Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.238557 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerName="dnsmasq-dns" containerID="cri-o://3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834" gracePeriod=10 Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.561525 4932 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d4c5dc9_4c73_486d_9427_2a5f07da9e89.slice/crio-conmon-827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d4c5dc9_4c73_486d_9427_2a5f07da9e89.slice/crio-conmon-827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.561587 4932 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d4c5dc9_4c73_486d_9427_2a5f07da9e89.slice/crio-827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d4c5dc9_4c73_486d_9427_2a5f07da9e89.slice/crio-827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.561606 4932 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-conmon-3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-conmon-3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.561625 4932 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2137f88_2dc2_4718_bd8d_229745974b9a.slice/crio-conmon-8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2137f88_2dc2_4718_bd8d_229745974b9a.slice/crio-conmon-8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.561642 4932 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.561663 4932 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2137f88_2dc2_4718_bd8d_229745974b9a.slice/crio-8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2137f88_2dc2_4718_bd8d_229745974b9a.slice/crio-8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.570111 4932 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37076824_e8b6_4b75_aea4_f463d7e50613.slice/crio-conmon-a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37076824_e8b6_4b75_aea4_f463d7e50613.slice/crio-conmon-a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: W0321 09:19:39.570308 4932 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37076824_e8b6_4b75_aea4_f463d7e50613.slice/crio-a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37076824_e8b6_4b75_aea4_f463d7e50613.slice/crio-a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233.scope: no such file or directory Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.750275 4932 generic.go:334] "Generic (PLEG): container finished" podID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerID="5f7c09f5d03374096bf0a5b6ce90263e7e3989080c74f976cb78e4e07d77f7e5" exitCode=137 Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.750364 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c74d85757-kjlwf" event={"ID":"e8adb280-44b6-4fb9-b358-aa75af003a44","Type":"ContainerDied","Data":"5f7c09f5d03374096bf0a5b6ce90263e7e3989080c74f976cb78e4e07d77f7e5"} Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.752962 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58" exitCode=1 Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.753030 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58"} Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.753059 4932 scope.go:117] "RemoveContainer" containerID="e34569c65ef63ac1abf4d70b584148f3284f2ae7d7ba5186f85a28c1f2539f80" Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.754425 4932 scope.go:117] "RemoveContainer" containerID="8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58" Mar 21 09:19:39 crc kubenswrapper[4932]: E0321 09:19:39.754710 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.758286 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c" exitCode=1 Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.758413 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" 
event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c"} Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.759765 4932 scope.go:117] "RemoveContainer" containerID="3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c" Mar 21 09:19:39 crc kubenswrapper[4932]: E0321 09:19:39.760183 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.765317 4932 generic.go:334] "Generic (PLEG): container finished" podID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerID="3e2b9cd771e5caba7f89839233dbdcbc1d9554d932a1f6a18b1d5fcfc6cf5be3" exitCode=137 Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.765419 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc94d857-qzbl4" event={"ID":"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da","Type":"ContainerDied","Data":"3e2b9cd771e5caba7f89839233dbdcbc1d9554d932a1f6a18b1d5fcfc6cf5be3"} Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.767688 4932 generic.go:334] "Generic (PLEG): container finished" podID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerID="3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834" exitCode=0 Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.767740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" event={"ID":"800026eb-fe3e-4b45-b809-3ec63f7143c9","Type":"ContainerDied","Data":"3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834"} Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.770328 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerID="4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab" exitCode=137 Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.771666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bfc8fff89-rv95c" event={"ID":"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a","Type":"ContainerDied","Data":"4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab"} Mar 21 09:19:39 crc kubenswrapper[4932]: I0321 09:19:39.785647 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 21 09:19:39 crc kubenswrapper[4932]: E0321 09:19:39.810026 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod829e124a_b451_4e8c_9d46_594c51a71418.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cd0a8ae_3458_4d4c_8f62_ab7e4ccd81da.slice/crio-conmon-3e2b9cd771e5caba7f89839233dbdcbc1d9554d932a1f6a18b1d5fcfc6cf5be3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba11bdb_a9d2_414d_b2df_3eaedd97df7e.slice/crio-42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8044dc63_0327_41d4_93fe_af2287271a84.slice/crio-conmon-62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a74ebb_1fc7_4e31_82dc_cb839cceeffb.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod800026eb_fe3e_4b45_b809_3ec63f7143c9.slice/crio-conmon-3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd63fc3d6_b6f3_44a8_b251_9dda2e82ed3a.slice/crio-conmon-4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8adb280_44b6_4fb9_b358_aa75af003a44.slice/crio-5f7c09f5d03374096bf0a5b6ce90263e7e3989080c74f976cb78e4e07d77f7e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8044dc63_0327_41d4_93fe_af2287271a84.slice/crio-62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod829e124a_b451_4e8c_9d46_594c51a71418.slice/crio-7fb811bc183161bdcd2298e58dab138b3c3d03aedbc810eac42bb304cc029633\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod800026eb_fe3e_4b45_b809_3ec63f7143c9.slice/crio-3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a74ebb_1fc7_4e31_82dc_cb839cceeffb.slice/crio-5aa1ddc866fe0a5464d3e3665373245af140c05e6b683795ac016055b4b562f3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd63fc3d6_b6f3_44a8_b251_9dda2e82ed3a.slice/crio-4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab.scope\": RecentStats: unable to find data in memory cache]" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.034378 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168019 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-scripts\") pod \"37076824-e8b6-4b75-aea4-f463d7e50613\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168364 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37076824-e8b6-4b75-aea4-f463d7e50613-etc-machine-id\") pod \"37076824-e8b6-4b75-aea4-f463d7e50613\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168445 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-config-data\") pod \"37076824-e8b6-4b75-aea4-f463d7e50613\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168472 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-db-sync-config-data\") pod \"37076824-e8b6-4b75-aea4-f463d7e50613\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168575 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-combined-ca-bundle\") pod \"37076824-e8b6-4b75-aea4-f463d7e50613\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168641 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-zg2c8\" (UniqueName: \"kubernetes.io/projected/37076824-e8b6-4b75-aea4-f463d7e50613-kube-api-access-zg2c8\") pod \"37076824-e8b6-4b75-aea4-f463d7e50613\" (UID: \"37076824-e8b6-4b75-aea4-f463d7e50613\") " Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.168849 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37076824-e8b6-4b75-aea4-f463d7e50613-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37076824-e8b6-4b75-aea4-f463d7e50613" (UID: "37076824-e8b6-4b75-aea4-f463d7e50613"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.169055 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37076824-e8b6-4b75-aea4-f463d7e50613-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.176562 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-scripts" (OuterVolumeSpecName: "scripts") pod "37076824-e8b6-4b75-aea4-f463d7e50613" (UID: "37076824-e8b6-4b75-aea4-f463d7e50613"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.190445 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "37076824-e8b6-4b75-aea4-f463d7e50613" (UID: "37076824-e8b6-4b75-aea4-f463d7e50613"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.192553 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37076824-e8b6-4b75-aea4-f463d7e50613-kube-api-access-zg2c8" (OuterVolumeSpecName: "kube-api-access-zg2c8") pod "37076824-e8b6-4b75-aea4-f463d7e50613" (UID: "37076824-e8b6-4b75-aea4-f463d7e50613"). InnerVolumeSpecName "kube-api-access-zg2c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.222576 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37076824-e8b6-4b75-aea4-f463d7e50613" (UID: "37076824-e8b6-4b75-aea4-f463d7e50613"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.270475 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-config-data" (OuterVolumeSpecName: "config-data") pod "37076824-e8b6-4b75-aea4-f463d7e50613" (UID: "37076824-e8b6-4b75-aea4-f463d7e50613"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.271895 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.271916 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.271927 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.271937 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2c8\" (UniqueName: \"kubernetes.io/projected/37076824-e8b6-4b75-aea4-f463d7e50613-kube-api-access-zg2c8\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.271947 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37076824-e8b6-4b75-aea4-f463d7e50613-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.801093 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vz4hd" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.802477 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vz4hd" event={"ID":"37076824-e8b6-4b75-aea4-f463d7e50613","Type":"ContainerDied","Data":"b95f64f58609eaf5cdf5a8ce086c6090a0858becddc53aa8d503d25959dc0e42"} Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.802516 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95f64f58609eaf5cdf5a8ce086c6090a0858becddc53aa8d503d25959dc0e42" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.859843 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:19:40 crc kubenswrapper[4932]: I0321 09:19:40.860546 4932 scope.go:117] "RemoveContainer" containerID="827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e" Mar 21 09:19:40 crc kubenswrapper[4932]: E0321 09:19:40.860887 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7d4c5dc9-4c73-486d-9427-2a5f07da9e89)\"" pod="openstack/watcher-decision-engine-0" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.301336 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:41 crc kubenswrapper[4932]: E0321 09:19:41.302052 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37076824-e8b6-4b75-aea4-f463d7e50613" containerName="cinder-db-sync" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.302065 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="37076824-e8b6-4b75-aea4-f463d7e50613" containerName="cinder-db-sync" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.302285 
4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="37076824-e8b6-4b75-aea4-f463d7e50613" containerName="cinder-db-sync" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.304389 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.309431 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.309610 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.309761 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-x4frd" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.309840 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.315708 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.389366 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84c4d945f5-9vdcr"] Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.391171 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.398820 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-scripts\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.398907 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcj95\" (UniqueName: \"kubernetes.io/projected/eddafca3-d459-4525-9f44-5e09410a725e-kube-api-access-kcj95\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.398930 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.399014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.399058 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddafca3-d459-4525-9f44-5e09410a725e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " 
pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.399078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.401586 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c4d945f5-9vdcr"] Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501188 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-sb\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501272 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-nb\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501337 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddafca3-d459-4525-9f44-5e09410a725e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501513 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-svc\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-scripts\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501677 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94xf\" (UniqueName: \"kubernetes.io/projected/03273311-f853-47e1-a73b-649485129727-kube-api-access-c94xf\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501747 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcj95\" (UniqueName: \"kubernetes.io/projected/eddafca3-d459-4525-9f44-5e09410a725e-kube-api-access-kcj95\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501774 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data-custom\") 
pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501836 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-swift-storage-0\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.501937 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-config\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.502059 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.503154 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddafca3-d459-4525-9f44-5e09410a725e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.519077 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " 
pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.519864 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-scripts\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.520390 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.534383 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.539631 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcj95\" (UniqueName: \"kubernetes.io/projected/eddafca3-d459-4525-9f44-5e09410a725e-kube-api-access-kcj95\") pod \"cinder-scheduler-0\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.569090 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.571217 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.573766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.585284 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-svc\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603793 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swgw\" (UniqueName: \"kubernetes.io/projected/d86fb296-12f2-4f99-b57f-77504f6cda72-kube-api-access-5swgw\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94xf\" (UniqueName: \"kubernetes.io/projected/03273311-f853-47e1-a73b-649485129727-kube-api-access-c94xf\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603875 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-swift-storage-0\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603909 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-scripts\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603960 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-config\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.603982 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data-custom\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.604030 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.604063 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86fb296-12f2-4f99-b57f-77504f6cda72-logs\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.604116 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.604138 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d86fb296-12f2-4f99-b57f-77504f6cda72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.604183 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-sb\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.604208 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-nb\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.605199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-nb\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.606077 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-svc\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: 
\"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.607061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-swift-storage-0\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.607701 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-sb\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.607747 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-config\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.636631 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.652774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94xf\" (UniqueName: \"kubernetes.io/projected/03273311-f853-47e1-a73b-649485129727-kube-api-access-c94xf\") pod \"dnsmasq-dns-84c4d945f5-9vdcr\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-scripts\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708185 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data-custom\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708212 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708249 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86fb296-12f2-4f99-b57f-77504f6cda72-logs\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708308 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d86fb296-12f2-4f99-b57f-77504f6cda72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.708480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swgw\" (UniqueName: \"kubernetes.io/projected/d86fb296-12f2-4f99-b57f-77504f6cda72-kube-api-access-5swgw\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.710672 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86fb296-12f2-4f99-b57f-77504f6cda72-logs\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.711578 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-scripts\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.711673 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d86fb296-12f2-4f99-b57f-77504f6cda72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 
09:19:41.714887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.714999 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.719264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.720920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data-custom\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.728512 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swgw\" (UniqueName: \"kubernetes.io/projected/d86fb296-12f2-4f99-b57f-77504f6cda72-kube-api-access-5swgw\") pod \"cinder-api-0\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " pod="openstack/cinder-api-0" Mar 21 09:19:41 crc kubenswrapper[4932]: I0321 09:19:41.993248 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.011850 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.020615 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.022669 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.024014 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.063536 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.122053 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-config\") pod \"800026eb-fe3e-4b45-b809-3ec63f7143c9\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.122403 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbx7r\" (UniqueName: \"kubernetes.io/projected/800026eb-fe3e-4b45-b809-3ec63f7143c9-kube-api-access-xbx7r\") pod \"800026eb-fe3e-4b45-b809-3ec63f7143c9\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.122550 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-config-data\") pod \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.122648 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8adb280-44b6-4fb9-b358-aa75af003a44-horizon-secret-key\") pod \"e8adb280-44b6-4fb9-b358-aa75af003a44\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.122773 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-scripts\") pod \"e8adb280-44b6-4fb9-b358-aa75af003a44\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.122882 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-config-data\") pod \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.123003 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-logs\") pod \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.123092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8adb280-44b6-4fb9-b358-aa75af003a44-logs\") pod \"e8adb280-44b6-4fb9-b358-aa75af003a44\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.123188 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2j6r\" (UniqueName: \"kubernetes.io/projected/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-kube-api-access-b2j6r\") pod \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 
09:19:42.123274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-config-data\") pod \"e8adb280-44b6-4fb9-b358-aa75af003a44\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.123397 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-horizon-secret-key\") pod \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.123509 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-swift-storage-0\") pod \"800026eb-fe3e-4b45-b809-3ec63f7143c9\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.123607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-scripts\") pod \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.131700 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbz7s\" (UniqueName: \"kubernetes.io/projected/e8adb280-44b6-4fb9-b358-aa75af003a44-kube-api-access-zbz7s\") pod \"e8adb280-44b6-4fb9-b358-aa75af003a44\" (UID: \"e8adb280-44b6-4fb9-b358-aa75af003a44\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.131944 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-sb\") pod 
\"800026eb-fe3e-4b45-b809-3ec63f7143c9\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.132044 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-nb\") pod \"800026eb-fe3e-4b45-b809-3ec63f7143c9\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.132135 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-scripts\") pod \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.132215 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-horizon-secret-key\") pod \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.132308 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-logs\") pod \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\" (UID: \"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.132489 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzfp4\" (UniqueName: \"kubernetes.io/projected/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-kube-api-access-mzfp4\") pod \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\" (UID: \"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.132635 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-svc\") pod \"800026eb-fe3e-4b45-b809-3ec63f7143c9\" (UID: \"800026eb-fe3e-4b45-b809-3ec63f7143c9\") " Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.154304 4932 scope.go:117] "RemoveContainer" containerID="76843e78ee1aed96ecca14b093534f9b5aad60e2922cc9c5d20277979884fa14" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.154529 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8adb280-44b6-4fb9-b358-aa75af003a44-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e8adb280-44b6-4fb9-b358-aa75af003a44" (UID: "e8adb280-44b6-4fb9-b358-aa75af003a44"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.161689 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800026eb-fe3e-4b45-b809-3ec63f7143c9-kube-api-access-xbx7r" (OuterVolumeSpecName: "kube-api-access-xbx7r") pod "800026eb-fe3e-4b45-b809-3ec63f7143c9" (UID: "800026eb-fe3e-4b45-b809-3ec63f7143c9"). InnerVolumeSpecName "kube-api-access-xbx7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.168754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-logs" (OuterVolumeSpecName: "logs") pod "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" (UID: "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.168985 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8adb280-44b6-4fb9-b358-aa75af003a44-logs" (OuterVolumeSpecName: "logs") pod "e8adb280-44b6-4fb9-b358-aa75af003a44" (UID: "e8adb280-44b6-4fb9-b358-aa75af003a44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.174341 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-logs" (OuterVolumeSpecName: "logs") pod "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" (UID: "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.180149 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8adb280-44b6-4fb9-b358-aa75af003a44-kube-api-access-zbz7s" (OuterVolumeSpecName: "kube-api-access-zbz7s") pod "e8adb280-44b6-4fb9-b358-aa75af003a44" (UID: "e8adb280-44b6-4fb9-b358-aa75af003a44"). InnerVolumeSpecName "kube-api-access-zbz7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.183628 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" (UID: "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.183909 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-kube-api-access-mzfp4" (OuterVolumeSpecName: "kube-api-access-mzfp4") pod "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" (UID: "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a"). InnerVolumeSpecName "kube-api-access-mzfp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.198657 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" (UID: "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.200699 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-kube-api-access-b2j6r" (OuterVolumeSpecName: "kube-api-access-b2j6r") pod "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" (UID: "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da"). InnerVolumeSpecName "kube-api-access-b2j6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240533 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8adb280-44b6-4fb9-b358-aa75af003a44-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240564 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240584 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8adb280-44b6-4fb9-b358-aa75af003a44-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240597 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2j6r\" (UniqueName: \"kubernetes.io/projected/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-kube-api-access-b2j6r\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240609 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240620 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbz7s\" (UniqueName: \"kubernetes.io/projected/e8adb280-44b6-4fb9-b358-aa75af003a44-kube-api-access-zbz7s\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240631 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240645 
4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240656 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzfp4\" (UniqueName: \"kubernetes.io/projected/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-kube-api-access-mzfp4\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.240666 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbx7r\" (UniqueName: \"kubernetes.io/projected/800026eb-fe3e-4b45-b809-3ec63f7143c9-kube-api-access-xbx7r\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.251407 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.327416 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-config-data" (OuterVolumeSpecName: "config-data") pod "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" (UID: "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.328669 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-config-data" (OuterVolumeSpecName: "config-data") pod "e8adb280-44b6-4fb9-b358-aa75af003a44" (UID: "e8adb280-44b6-4fb9-b358-aa75af003a44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.357655 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-scripts" (OuterVolumeSpecName: "scripts") pod "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" (UID: "d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.358575 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.360679 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.360753 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.361006 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-config" (OuterVolumeSpecName: "config") pod "800026eb-fe3e-4b45-b809-3ec63f7143c9" (UID: "800026eb-fe3e-4b45-b809-3ec63f7143c9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.378986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "800026eb-fe3e-4b45-b809-3ec63f7143c9" (UID: "800026eb-fe3e-4b45-b809-3ec63f7143c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.402442 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-scripts" (OuterVolumeSpecName: "scripts") pod "e8adb280-44b6-4fb9-b358-aa75af003a44" (UID: "e8adb280-44b6-4fb9-b358-aa75af003a44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.402675 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-config-data" (OuterVolumeSpecName: "config-data") pod "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" (UID: "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.410840 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-scripts" (OuterVolumeSpecName: "scripts") pod "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" (UID: "8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.430469 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "800026eb-fe3e-4b45-b809-3ec63f7143c9" (UID: "800026eb-fe3e-4b45-b809-3ec63f7143c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.443700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "800026eb-fe3e-4b45-b809-3ec63f7143c9" (UID: "800026eb-fe3e-4b45-b809-3ec63f7143c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.459018 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "800026eb-fe3e-4b45-b809-3ec63f7143c9" (UID: "800026eb-fe3e-4b45-b809-3ec63f7143c9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463175 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8adb280-44b6-4fb9-b358-aa75af003a44-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463198 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463208 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463217 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463226 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463238 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463245 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.463254 4932 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800026eb-fe3e-4b45-b809-3ec63f7143c9-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.700905 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c4d945f5-9vdcr"] Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.888985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" event={"ID":"03273311-f853-47e1-a73b-649485129727","Type":"ContainerStarted","Data":"14b0a49520509811d46e21e5056c067df8f2ed97dba388f0850c2633ccd25f3b"} Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.909434 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc94d857-qzbl4" event={"ID":"8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da","Type":"ContainerDied","Data":"1998f91697b50d767c185915dc37fda47f6e38bdced680305fa42955a425b93d"} Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.909497 4932 scope.go:117] "RemoveContainer" containerID="ca3a0ca9da247fcaa5a0b427775d9990ca1a076bc2e692a300a516361e17118e" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.909686 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dc94d857-qzbl4" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.935306 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.952073 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" event={"ID":"800026eb-fe3e-4b45-b809-3ec63f7143c9","Type":"ContainerDied","Data":"9ff4da7264561b4cc747d82cb024b51756738f1362f92b2c1249892d40fca366"} Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.952204 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fdc97999-qqf4w" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.964586 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bfc8fff89-rv95c" event={"ID":"d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a","Type":"ContainerDied","Data":"c58f397ff3d37c1c7d09eb59ff18d266a00bd82682101922174dfd8c049e0bd6"} Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.964711 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bfc8fff89-rv95c" Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.967775 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c74d85757-kjlwf" event={"ID":"e8adb280-44b6-4fb9-b358-aa75af003a44","Type":"ContainerDied","Data":"333601fa523ccfb7121cb5034efe3a6e1d2470fae8418c3ce83653e938998446"} Mar 21 09:19:42 crc kubenswrapper[4932]: I0321 09:19:42.967992 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c74d85757-kjlwf" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.089036 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bfc8fff89-rv95c"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.131982 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bfc8fff89-rv95c"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.153422 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c74d85757-kjlwf"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.167083 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c74d85757-kjlwf"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.177484 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dc94d857-qzbl4"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.184054 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-7dc94d857-qzbl4"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.196153 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fdc97999-qqf4w"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.215368 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fdc97999-qqf4w"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.232167 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.291049 4932 scope.go:117] "RemoveContainer" containerID="3e2b9cd771e5caba7f89839233dbdcbc1d9554d932a1f6a18b1d5fcfc6cf5be3" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.441258 4932 scope.go:117] "RemoveContainer" containerID="3dd33b9a361075e6cabd684aa076297b4bf922e7ccc60277f6c9f8ca39445834" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.557308 4932 scope.go:117] "RemoveContainer" containerID="bb4b2734bd8629ef3d087f04c25b570e686de588a1b3dee6ed36352cfa292f9e" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.718721 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" path="/var/lib/kubelet/pods/800026eb-fe3e-4b45-b809-3ec63f7143c9/volumes" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.719794 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" path="/var/lib/kubelet/pods/8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da/volumes" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.720651 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" path="/var/lib/kubelet/pods/d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a/volumes" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.722202 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" 
path="/var/lib/kubelet/pods/e8adb280-44b6-4fb9-b358-aa75af003a44/volumes" Mar 21 09:19:43 crc kubenswrapper[4932]: I0321 09:19:43.739458 4932 scope.go:117] "RemoveContainer" containerID="6227dbe5e60e32f4f6a2f54f08549059b5b6bbce749531e48d9e7d352ee47a40" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.003687 4932 generic.go:334] "Generic (PLEG): container finished" podID="03273311-f853-47e1-a73b-649485129727" containerID="bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888" exitCode=0 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.003741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" event={"ID":"03273311-f853-47e1-a73b-649485129727","Type":"ContainerDied","Data":"bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888"} Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.052227 4932 scope.go:117] "RemoveContainer" containerID="4b0ecc8b5074317b7cb53990aa1aa7206b8c7ba277b907a9656ef1ce0b4099ab" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.058303 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerStarted","Data":"b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938"} Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.058478 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-central-agent" containerID="cri-o://8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552" gracePeriod=30 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.058688 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.058995 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="proxy-httpd" containerID="cri-o://b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938" gracePeriod=30 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.058999 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="sg-core" containerID="cri-o://c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02" gracePeriod=30 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.059035 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-notification-agent" containerID="cri-o://878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371" gracePeriod=30 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.070953 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d86fb296-12f2-4f99-b57f-77504f6cda72","Type":"ContainerStarted","Data":"f5094d0107f92e9e374884715149caf9f08d962f229043e1dc3ce617eaa88b5f"} Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.073262 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eddafca3-d459-4525-9f44-5e09410a725e","Type":"ContainerStarted","Data":"f8d21366849bd2e9da300d67cdd5931230824204cf8c579fa115d7f57985f699"} Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.094529 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.950904431 podStartE2EDuration="1m6.088079961s" podCreationTimestamp="2026-03-21 09:18:38 +0000 UTC" firstStartedPulling="2026-03-21 09:18:40.194797005 +0000 UTC m=+1223.789995274" lastFinishedPulling="2026-03-21 09:19:42.331972535 +0000 UTC m=+1285.927170804" observedRunningTime="2026-03-21 09:19:44.080888689 +0000 UTC 
m=+1287.676086958" watchObservedRunningTime="2026-03-21 09:19:44.088079961 +0000 UTC m=+1287.683278230" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.325738 4932 scope.go:117] "RemoveContainer" containerID="796c29fb749f6c917d0af3192300a74bb9e33e2a8c17b2fa1475434d5224ae0d" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.368370 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.456694 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.705361 4932 scope.go:117] "RemoveContainer" containerID="5f7c09f5d03374096bf0a5b6ce90263e7e3989080c74f976cb78e4e07d77f7e5" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.786561 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86b8b664df-8nqhr"] Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.787186 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86b8b664df-8nqhr" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-api" containerID="cri-o://13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4" gracePeriod=30 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.787618 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86b8b664df-8nqhr" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-httpd" containerID="cri-o://fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926" gracePeriod=30 Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.825428 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882042 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75cc58bddf-lsnvj"] Mar 
21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882656 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882679 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882698 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882706 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882726 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882733 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882751 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerName="init" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882758 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerName="init" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882780 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerName="dnsmasq-dns" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882787 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerName="dnsmasq-dns" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882798 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882805 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882831 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882838 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: E0321 09:19:44.882850 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.882857 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883070 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883088 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="800026eb-fe3e-4b45-b809-3ec63f7143c9" containerName="dnsmasq-dns" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883108 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883121 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883141 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e8adb280-44b6-4fb9-b358-aa75af003a44" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883155 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd0a8ae-3458-4d4c-8f62-ab7e4ccd81da" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.883166 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63fc3d6-b6f3-44a8-b251-9dda2e82ed3a" containerName="horizon-log" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.884558 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.893727 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75cc58bddf-lsnvj"] Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983019 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-public-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-combined-ca-bundle\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983135 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-internal-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " 
pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983151 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66nb\" (UniqueName: \"kubernetes.io/projected/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-kube-api-access-c66nb\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983175 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-config\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983219 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-httpd-config\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:44 crc kubenswrapper[4932]: I0321 09:19:44.983296 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-ovndb-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.085234 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-combined-ca-bundle\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " 
pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.085277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-internal-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.085294 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c66nb\" (UniqueName: \"kubernetes.io/projected/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-kube-api-access-c66nb\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.085319 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-config\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.085389 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-httpd-config\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.085457 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-ovndb-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 
09:19:45.085504 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-public-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.093054 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-combined-ca-bundle\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.098262 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-internal-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.105664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-config\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.108014 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-ovndb-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.108498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-public-tls-certs\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.108941 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-httpd-config\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.112119 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c66nb\" (UniqueName: \"kubernetes.io/projected/5fd863c3-5f3c-4d25-96db-9a8154ecedcf-kube-api-access-c66nb\") pod \"neutron-75cc58bddf-lsnvj\" (UID: \"5fd863c3-5f3c-4d25-96db-9a8154ecedcf\") " pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.112693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d86fb296-12f2-4f99-b57f-77504f6cda72","Type":"ContainerStarted","Data":"a24e799129588e52243732bfbb5034fd0154bcbc909ecf29f388cc8b5729bcc6"} Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.122075 4932 generic.go:334] "Generic (PLEG): container finished" podID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerID="fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926" exitCode=0 Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.122252 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b8b664df-8nqhr" event={"ID":"76427553-0e6c-4a84-820e-34fcfe6732a4","Type":"ContainerDied","Data":"fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926"} Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.133548 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" 
event={"ID":"03273311-f853-47e1-a73b-649485129727","Type":"ContainerStarted","Data":"a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29"} Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.133707 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.140324 4932 generic.go:334] "Generic (PLEG): container finished" podID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerID="b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938" exitCode=0 Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.140358 4932 generic.go:334] "Generic (PLEG): container finished" podID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerID="c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02" exitCode=2 Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.140366 4932 generic.go:334] "Generic (PLEG): container finished" podID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerID="8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552" exitCode=0 Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.140383 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerDied","Data":"b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938"} Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.140401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerDied","Data":"c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02"} Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.140410 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerDied","Data":"8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552"} Mar 21 
09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.163552 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" podStartSLOduration=4.163528672 podStartE2EDuration="4.163528672s" podCreationTimestamp="2026-03-21 09:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:45.15568669 +0000 UTC m=+1288.750884959" watchObservedRunningTime="2026-03-21 09:19:45.163528672 +0000 UTC m=+1288.758726941" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.357241 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.414753 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:45 crc kubenswrapper[4932]: I0321 09:19:45.987342 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb57d57b8-l7z46" Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.078403 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75cc58bddf-lsnvj"] Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.092486 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f975cdb74-g8tw2"] Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.092746 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f975cdb74-g8tw2" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api-log" containerID="cri-o://7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c" gracePeriod=30 Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.093241 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f975cdb74-g8tw2" 
podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api" containerID="cri-o://a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8" gracePeriod=30 Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.114586 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6f975cdb74-g8tw2" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": EOF" Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.215603 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eddafca3-d459-4525-9f44-5e09410a725e","Type":"ContainerStarted","Data":"41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15"} Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.215648 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eddafca3-d459-4525-9f44-5e09410a725e","Type":"ContainerStarted","Data":"3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c"} Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.229551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d86fb296-12f2-4f99-b57f-77504f6cda72","Type":"ContainerStarted","Data":"02f245eec590df94d02ae7a55e200524c3b5009f00be1fe47a6b01a24a3b38d6"} Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.229706 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api-log" containerID="cri-o://a24e799129588e52243732bfbb5034fd0154bcbc909ecf29f388cc8b5729bcc6" gracePeriod=30 Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.229935 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.229968 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api" containerID="cri-o://02f245eec590df94d02ae7a55e200524c3b5009f00be1fe47a6b01a24a3b38d6" gracePeriod=30 Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.234585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cc58bddf-lsnvj" event={"ID":"5fd863c3-5f3c-4d25-96db-9a8154ecedcf","Type":"ContainerStarted","Data":"f0b08f56a0ab3d66ba77f1c27d99768b61f44a6700ac53a8030154028c962415"} Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.304526 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.566107308 podStartE2EDuration="5.304504452s" podCreationTimestamp="2026-03-21 09:19:41 +0000 UTC" firstStartedPulling="2026-03-21 09:19:43.001384954 +0000 UTC m=+1286.596583223" lastFinishedPulling="2026-03-21 09:19:43.739782098 +0000 UTC m=+1287.334980367" observedRunningTime="2026-03-21 09:19:46.276579901 +0000 UTC m=+1289.871778170" watchObservedRunningTime="2026-03-21 09:19:46.304504452 +0000 UTC m=+1289.899702721" Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.375281 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.375263573 podStartE2EDuration="5.375263573s" podCreationTimestamp="2026-03-21 09:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:46.329863023 +0000 UTC m=+1289.925061292" watchObservedRunningTime="2026-03-21 09:19:46.375263573 +0000 UTC m=+1289.970461842" Mar 21 09:19:46 crc kubenswrapper[4932]: I0321 09:19:46.640200 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.231694 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.255962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-httpd-config\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.256048 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-combined-ca-bundle\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.256116 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-public-tls-certs\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.256179 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-config\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.256212 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-ovndb-tls-certs\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.256319 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-internal-tls-certs\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.256481 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2849\" (UniqueName: \"kubernetes.io/projected/76427553-0e6c-4a84-820e-34fcfe6732a4-kube-api-access-l2849\") pod \"76427553-0e6c-4a84-820e-34fcfe6732a4\" (UID: \"76427553-0e6c-4a84-820e-34fcfe6732a4\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.286712 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cc58bddf-lsnvj" event={"ID":"5fd863c3-5f3c-4d25-96db-9a8154ecedcf","Type":"ContainerStarted","Data":"cd0af1e920bc7b24ae7c83610d83657de5f21d1040e4a75523bc824ebbe66b12"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.286784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cc58bddf-lsnvj" event={"ID":"5fd863c3-5f3c-4d25-96db-9a8154ecedcf","Type":"ContainerStarted","Data":"aa2b0a18c4b6af569e03ccc0feb718d8632c826e8dbe5a208e5597cf1f4e0043"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.287424 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.317833 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76427553-0e6c-4a84-820e-34fcfe6732a4-kube-api-access-l2849" (OuterVolumeSpecName: "kube-api-access-l2849") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "kube-api-access-l2849". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.322813 4932 generic.go:334] "Generic (PLEG): container finished" podID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerID="7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c" exitCode=143 Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.322883 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f975cdb74-g8tw2" event={"ID":"70233835-5b9c-4b42-a3e1-07ccbccfeaf3","Type":"ContainerDied","Data":"7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.325648 4932 generic.go:334] "Generic (PLEG): container finished" podID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerID="02f245eec590df94d02ae7a55e200524c3b5009f00be1fe47a6b01a24a3b38d6" exitCode=0 Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.325686 4932 generic.go:334] "Generic (PLEG): container finished" podID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerID="a24e799129588e52243732bfbb5034fd0154bcbc909ecf29f388cc8b5729bcc6" exitCode=143 Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.325743 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d86fb296-12f2-4f99-b57f-77504f6cda72","Type":"ContainerDied","Data":"02f245eec590df94d02ae7a55e200524c3b5009f00be1fe47a6b01a24a3b38d6"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.325776 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d86fb296-12f2-4f99-b57f-77504f6cda72","Type":"ContainerDied","Data":"a24e799129588e52243732bfbb5034fd0154bcbc909ecf29f388cc8b5729bcc6"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.329298 4932 generic.go:334] "Generic (PLEG): container finished" podID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerID="13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4" exitCode=0 Mar 21 
09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.330763 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b8b664df-8nqhr" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.331441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b8b664df-8nqhr" event={"ID":"76427553-0e6c-4a84-820e-34fcfe6732a4","Type":"ContainerDied","Data":"13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.331473 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b8b664df-8nqhr" event={"ID":"76427553-0e6c-4a84-820e-34fcfe6732a4","Type":"ContainerDied","Data":"ee669e59dfbc017f81e9e47583ea253261911edf9cca3572a58609e18ce9a8c8"} Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.331493 4932 scope.go:117] "RemoveContainer" containerID="fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.338272 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.359287 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2849\" (UniqueName: \"kubernetes.io/projected/76427553-0e6c-4a84-820e-34fcfe6732a4-kube-api-access-l2849\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.359516 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.393685 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.401959 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75cc58bddf-lsnvj" podStartSLOduration=3.401833758 podStartE2EDuration="3.401833758s" podCreationTimestamp="2026-03-21 09:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:47.322013689 +0000 UTC m=+1290.917211958" watchObservedRunningTime="2026-03-21 09:19:47.401833758 +0000 UTC m=+1290.997032027" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.427564 4932 scope.go:117] "RemoveContainer" containerID="13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.433227 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.437135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-config" (OuterVolumeSpecName: "config") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.449170 4932 scope.go:117] "RemoveContainer" containerID="fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926" Mar 21 09:19:47 crc kubenswrapper[4932]: E0321 09:19:47.449614 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926\": container with ID starting with fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926 not found: ID does not exist" containerID="fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.449662 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926"} err="failed to get container status \"fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926\": rpc error: code = NotFound desc = could not find container \"fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926\": container with ID starting with fe85b9011ffdf7685dbc0bab550016d1b6d418798d67d69a3424c7ec34c6b926 not found: ID does not exist" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.449704 4932 scope.go:117] "RemoveContainer" containerID="13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4" Mar 21 09:19:47 crc kubenswrapper[4932]: E0321 09:19:47.452213 4932 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4\": container with ID starting with 13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4 not found: ID does not exist" containerID="13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.452252 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4"} err="failed to get container status \"13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4\": rpc error: code = NotFound desc = could not find container \"13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4\": container with ID starting with 13b8dee1117a705419279c32b2408d3355e904175e8e41169be6326bee16e2a4 not found: ID does not exist" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.457609 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.458315 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.460610 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.460695 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data-custom\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.460728 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86fb296-12f2-4f99-b57f-77504f6cda72-logs\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.460762 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-combined-ca-bundle\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.460850 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swgw\" (UniqueName: \"kubernetes.io/projected/d86fb296-12f2-4f99-b57f-77504f6cda72-kube-api-access-5swgw\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.460977 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d86fb296-12f2-4f99-b57f-77504f6cda72-etc-machine-id\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.461032 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-scripts\") pod \"d86fb296-12f2-4f99-b57f-77504f6cda72\" (UID: \"d86fb296-12f2-4f99-b57f-77504f6cda72\") " Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.461494 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.461511 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.461522 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.461533 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.462230 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d86fb296-12f2-4f99-b57f-77504f6cda72-logs" (OuterVolumeSpecName: "logs") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.463488 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d86fb296-12f2-4f99-b57f-77504f6cda72-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.464682 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-scripts" (OuterVolumeSpecName: "scripts") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.466886 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.467471 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86fb296-12f2-4f99-b57f-77504f6cda72-kube-api-access-5swgw" (OuterVolumeSpecName: "kube-api-access-5swgw") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "kube-api-access-5swgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.475067 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "76427553-0e6c-4a84-820e-34fcfe6732a4" (UID: "76427553-0e6c-4a84-820e-34fcfe6732a4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.488818 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.515851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data" (OuterVolumeSpecName: "config-data") pod "d86fb296-12f2-4f99-b57f-77504f6cda72" (UID: "d86fb296-12f2-4f99-b57f-77504f6cda72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563089 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d86fb296-12f2-4f99-b57f-77504f6cda72-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563162 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563178 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563190 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563203 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d86fb296-12f2-4f99-b57f-77504f6cda72-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563216 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86fb296-12f2-4f99-b57f-77504f6cda72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563228 4932 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76427553-0e6c-4a84-820e-34fcfe6732a4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.563240 4932 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-5swgw\" (UniqueName: \"kubernetes.io/projected/d86fb296-12f2-4f99-b57f-77504f6cda72-kube-api-access-5swgw\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.668720 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86b8b664df-8nqhr"] Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.677914 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86b8b664df-8nqhr"] Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.715685 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" path="/var/lib/kubelet/pods/76427553-0e6c-4a84-820e-34fcfe6732a4/volumes" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.740500 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.741202 4932 scope.go:117] "RemoveContainer" containerID="3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c" Mar 21 09:19:47 crc kubenswrapper[4932]: E0321 09:19:47.741431 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.741474 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.948222 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.948318 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:19:47 crc kubenswrapper[4932]: I0321 09:19:47.949329 4932 scope.go:117] "RemoveContainer" containerID="8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58" Mar 21 09:19:47 crc kubenswrapper[4932]: E0321 09:19:47.949605 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.348123 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.348144 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d86fb296-12f2-4f99-b57f-77504f6cda72","Type":"ContainerDied","Data":"f5094d0107f92e9e374884715149caf9f08d962f229043e1dc3ce617eaa88b5f"} Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.348213 4932 scope.go:117] "RemoveContainer" containerID="02f245eec590df94d02ae7a55e200524c3b5009f00be1fe47a6b01a24a3b38d6" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.354040 4932 scope.go:117] "RemoveContainer" containerID="8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58" Mar 21 09:19:48 crc kubenswrapper[4932]: E0321 09:19:48.354264 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.354564 4932 scope.go:117] "RemoveContainer" 
containerID="3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c" Mar 21 09:19:48 crc kubenswrapper[4932]: E0321 09:19:48.354744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.386419 4932 scope.go:117] "RemoveContainer" containerID="a24e799129588e52243732bfbb5034fd0154bcbc909ecf29f388cc8b5729bcc6" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.386547 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.424320 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.469963 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:48 crc kubenswrapper[4932]: E0321 09:19:48.470447 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api-log" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470460 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api-log" Mar 21 09:19:48 crc kubenswrapper[4932]: E0321 09:19:48.470472 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-api" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470479 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-api" Mar 21 09:19:48 crc kubenswrapper[4932]: E0321 09:19:48.470501 4932 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470507 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api" Mar 21 09:19:48 crc kubenswrapper[4932]: E0321 09:19:48.470523 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-httpd" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470529 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-httpd" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470727 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470742 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-httpd" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470750 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-api" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.470761 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" containerName="cinder-api-log" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.472507 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.480567 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.480736 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.480846 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.496601 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582240 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/250ff311-5acb-4def-9d7c-ead2d48f29bd-logs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582330 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582510 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmz5\" (UniqueName: \"kubernetes.io/projected/250ff311-5acb-4def-9d7c-ead2d48f29bd-kube-api-access-gvmz5\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582554 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/250ff311-5acb-4def-9d7c-ead2d48f29bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-config-data\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.582845 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-scripts\") pod 
\"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684383 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684474 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvmz5\" (UniqueName: \"kubernetes.io/projected/250ff311-5acb-4def-9d7c-ead2d48f29bd-kube-api-access-gvmz5\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684511 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/250ff311-5acb-4def-9d7c-ead2d48f29bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684549 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-config-data\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684606 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-scripts\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.684650 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/250ff311-5acb-4def-9d7c-ead2d48f29bd-logs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.685020 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/250ff311-5acb-4def-9d7c-ead2d48f29bd-logs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.685648 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/250ff311-5acb-4def-9d7c-ead2d48f29bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 
09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.689309 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.689997 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.691512 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-scripts\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.694120 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.694384 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-config-data\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.698756 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/250ff311-5acb-4def-9d7c-ead2d48f29bd-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.709330 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvmz5\" (UniqueName: \"kubernetes.io/projected/250ff311-5acb-4def-9d7c-ead2d48f29bd-kube-api-access-gvmz5\") pod \"cinder-api-0\" (UID: \"250ff311-5acb-4def-9d7c-ead2d48f29bd\") " pod="openstack/cinder-api-0" Mar 21 09:19:48 crc kubenswrapper[4932]: I0321 09:19:48.802771 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.262900 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 21 09:19:49 crc kubenswrapper[4932]: W0321 09:19:49.268888 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod250ff311_5acb_4def_9d7c_ead2d48f29bd.slice/crio-612c4c97fe9b37968dafd2694372c989d55e1f9c7d00d16b840cb262daaad1c1 WatchSource:0}: Error finding container 612c4c97fe9b37968dafd2694372c989d55e1f9c7d00d16b840cb262daaad1c1: Status 404 returned error can't find the container with id 612c4c97fe9b37968dafd2694372c989d55e1f9c7d00d16b840cb262daaad1c1 Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.369170 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"250ff311-5acb-4def-9d7c-ead2d48f29bd","Type":"ContainerStarted","Data":"612c4c97fe9b37968dafd2694372c989d55e1f9c7d00d16b840cb262daaad1c1"} Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.529985 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f975cdb74-g8tw2" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": dial tcp 10.217.0.182:9311: connect: connection refused" Mar 21 09:19:49 crc 
kubenswrapper[4932]: I0321 09:19:49.530012 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f975cdb74-g8tw2" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": dial tcp 10.217.0.182:9311: connect: connection refused" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.714103 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d86fb296-12f2-4f99-b57f-77504f6cda72" path="/var/lib/kubelet/pods/d86fb296-12f2-4f99-b57f-77504f6cda72/volumes" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.808705 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.907867 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-log-httpd\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.908177 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-run-httpd\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.908258 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-combined-ca-bundle\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.908410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-scripts\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.908487 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-config-data\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.908561 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xlcn\" (UniqueName: \"kubernetes.io/projected/02ab16ee-8108-498f-8450-bb82bf6ce347-kube-api-access-4xlcn\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.908599 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-sg-core-conf-yaml\") pod \"02ab16ee-8108-498f-8450-bb82bf6ce347\" (UID: \"02ab16ee-8108-498f-8450-bb82bf6ce347\") " Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.909420 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.909935 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.914831 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-scripts" (OuterVolumeSpecName: "scripts") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.916516 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ab16ee-8108-498f-8450-bb82bf6ce347-kube-api-access-4xlcn" (OuterVolumeSpecName: "kube-api-access-4xlcn") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "kube-api-access-4xlcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.941780 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:49 crc kubenswrapper[4932]: I0321 09:19:49.994182 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.010480 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.010516 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.010528 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xlcn\" (UniqueName: \"kubernetes.io/projected/02ab16ee-8108-498f-8450-bb82bf6ce347-kube-api-access-4xlcn\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.010539 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.010548 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.010559 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/02ab16ee-8108-498f-8450-bb82bf6ce347-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.019210 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-config-data" (OuterVolumeSpecName: "config-data") pod "02ab16ee-8108-498f-8450-bb82bf6ce347" (UID: "02ab16ee-8108-498f-8450-bb82bf6ce347"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.097516 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f975cdb74-g8tw2" Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.107643 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba11bdb_a9d2_414d_b2df_3eaedd97df7e.slice/crio-42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b\": RecentStats: unable to find data in memory cache]" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.112241 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab16ee-8108-498f-8450-bb82bf6ce347-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.213475 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twtch\" (UniqueName: \"kubernetes.io/projected/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-kube-api-access-twtch\") pod \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") " Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.213583 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-logs\") pod \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") "
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.214128 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-logs" (OuterVolumeSpecName: "logs") pod "70233835-5b9c-4b42-a3e1-07ccbccfeaf3" (UID: "70233835-5b9c-4b42-a3e1-07ccbccfeaf3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.214661 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data-custom\") pod \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") "
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.214817 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data\") pod \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") "
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.214977 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-combined-ca-bundle\") pod \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\" (UID: \"70233835-5b9c-4b42-a3e1-07ccbccfeaf3\") "
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.216108 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.218279 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "70233835-5b9c-4b42-a3e1-07ccbccfeaf3" (UID: "70233835-5b9c-4b42-a3e1-07ccbccfeaf3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.218507 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-kube-api-access-twtch" (OuterVolumeSpecName: "kube-api-access-twtch") pod "70233835-5b9c-4b42-a3e1-07ccbccfeaf3" (UID: "70233835-5b9c-4b42-a3e1-07ccbccfeaf3"). InnerVolumeSpecName "kube-api-access-twtch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.241563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70233835-5b9c-4b42-a3e1-07ccbccfeaf3" (UID: "70233835-5b9c-4b42-a3e1-07ccbccfeaf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.265318 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data" (OuterVolumeSpecName: "config-data") pod "70233835-5b9c-4b42-a3e1-07ccbccfeaf3" (UID: "70233835-5b9c-4b42-a3e1-07ccbccfeaf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.318620 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.318648 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twtch\" (UniqueName: \"kubernetes.io/projected/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-kube-api-access-twtch\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.318663 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.318673 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70233835-5b9c-4b42-a3e1-07ccbccfeaf3-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.378942 4932 generic.go:334] "Generic (PLEG): container finished" podID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerID="a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8" exitCode=0
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.379009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f975cdb74-g8tw2" event={"ID":"70233835-5b9c-4b42-a3e1-07ccbccfeaf3","Type":"ContainerDied","Data":"a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8"}
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.379024 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f975cdb74-g8tw2"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.379050 4932 scope.go:117] "RemoveContainer" containerID="a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.379039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f975cdb74-g8tw2" event={"ID":"70233835-5b9c-4b42-a3e1-07ccbccfeaf3","Type":"ContainerDied","Data":"f13a2ed6c0fb2ee7e4f1357e9ef564fbab0fba150a4fee7f80ddb2a799864b56"}
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.382147 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"250ff311-5acb-4def-9d7c-ead2d48f29bd","Type":"ContainerStarted","Data":"33d9751b1b84daf259a6a7401eda6ee6503a84b7b0386dffd3fea77460c8241c"}
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.386173 4932 generic.go:334] "Generic (PLEG): container finished" podID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerID="878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371" exitCode=0
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.386228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerDied","Data":"878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371"}
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.386259 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02ab16ee-8108-498f-8450-bb82bf6ce347","Type":"ContainerDied","Data":"2bbf327d00f6879add54437bc7908f92f7ba7ed7d2588e47714f525b0c3159f5"}
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.386287 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.414888 4932 scope.go:117] "RemoveContainer" containerID="7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.424045 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f975cdb74-g8tw2"]
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.438003 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f975cdb74-g8tw2"]
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.456691 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.458707 4932 scope.go:117] "RemoveContainer" containerID="a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.459820 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8\": container with ID starting with a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8 not found: ID does not exist" containerID="a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.459863 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8"} err="failed to get container status \"a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8\": rpc error: code = NotFound desc = could not find container \"a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8\": container with ID starting with a2345696bf08db320387c8367d6ebd4890fac75c10bd01d56c7c2a8bf72933e8 not found: ID does not exist"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.459890 4932 scope.go:117] "RemoveContainer" containerID="7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.460173 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c\": container with ID starting with 7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c not found: ID does not exist" containerID="7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.460212 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c"} err="failed to get container status \"7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c\": rpc error: code = NotFound desc = could not find container \"7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c\": container with ID starting with 7805ed8dd30c0d4b7bc166afc1a7a7c200b14f65cb61d3ed9327c67c58f43e9c not found: ID does not exist"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.460242 4932 scope.go:117] "RemoveContainer" containerID="b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.477251 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.489696 4932 scope.go:117] "RemoveContainer" containerID="c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.491654 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.492145 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-central-agent"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492168 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-central-agent"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.492184 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="sg-core"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492203 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="sg-core"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.492221 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api-log"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492230 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api-log"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.492253 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="proxy-httpd"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492261 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="proxy-httpd"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.492288 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492297 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.492311 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-notification-agent"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492319 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-notification-agent"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492565 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492591 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-central-agent"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492601 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="ceilometer-notification-agent"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492620 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="sg-core"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492636 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" containerName="proxy-httpd"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.492649 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" containerName="barbican-api-log"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.494827 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.497577 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.499999 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.507993 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.523657 4932 scope.go:117] "RemoveContainer" containerID="878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.550196 4932 scope.go:117] "RemoveContainer" containerID="8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.569916 4932 scope.go:117] "RemoveContainer" containerID="b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.570422 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938\": container with ID starting with b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938 not found: ID does not exist" containerID="b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.570469 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938"} err="failed to get container status \"b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938\": rpc error: code = NotFound desc = could not find container \"b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938\": container with ID starting with b02cbeb68691033d048a2895527b2a25e143b2c0712d127aeeab91ac5b671938 not found: ID does not exist"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.570500 4932 scope.go:117] "RemoveContainer" containerID="c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.570886 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02\": container with ID starting with c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02 not found: ID does not exist" containerID="c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.570921 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02"} err="failed to get container status \"c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02\": rpc error: code = NotFound desc = could not find container \"c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02\": container with ID starting with c39b6648cb5d2cde26fae4e7ba9a4fa40e0635d2544c720fb1a03a26f0ca2e02 not found: ID does not exist"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.570949 4932 scope.go:117] "RemoveContainer" containerID="878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.571290 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371\": container with ID starting with 878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371 not found: ID does not exist" containerID="878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.571341 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371"} err="failed to get container status \"878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371\": rpc error: code = NotFound desc = could not find container \"878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371\": container with ID starting with 878b4c7f58edcefd98372859057e70efa7fd6cdfeafc06ec68bcadcf04210371 not found: ID does not exist"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.571443 4932 scope.go:117] "RemoveContainer" containerID="8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552"
Mar 21 09:19:50 crc kubenswrapper[4932]: E0321 09:19:50.571761 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552\": container with ID starting with 8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552 not found: ID does not exist" containerID="8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.571785 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552"} err="failed to get container status \"8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552\": rpc error: code = NotFound desc = could not find container \"8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552\": container with ID starting with 8b345ac45301ee472b6b3e0e0c498bd51459c07eb9e4ce957311e398af9ef552 not found: ID does not exist"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.626699 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.626839 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjg6l\" (UniqueName: \"kubernetes.io/projected/f4f6e1b0-2895-4f97-848f-80d39a53745f-kube-api-access-vjg6l\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.626961 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-config-data\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.626978 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-run-httpd\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.626999 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-log-httpd\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.627013 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.627028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-scripts\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.728801 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-config-data\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-run-httpd\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-log-httpd\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729373 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-scripts\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjg6l\" (UniqueName: \"kubernetes.io/projected/f4f6e1b0-2895-4f97-848f-80d39a53745f-kube-api-access-vjg6l\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-run-httpd\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.729927 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-log-httpd\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.736178 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-scripts\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.736762 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-config-data\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.737038 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.738959 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.753105 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjg6l\" (UniqueName: \"kubernetes.io/projected/f4f6e1b0-2895-4f97-848f-80d39a53745f-kube-api-access-vjg6l\") pod \"ceilometer-0\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.817690 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.859561 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.859625 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Mar 21 09:19:50 crc kubenswrapper[4932]: I0321 09:19:50.860373 4932 scope.go:117] "RemoveContainer" containerID="827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.293618 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:19:51 crc kubenswrapper[4932]: W0321 09:19:51.301284 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4f6e1b0_2895_4f97_848f_80d39a53745f.slice/crio-f63927c28ee0faa981ea6ee3b347af03006284ae5da2f6327d7f90587ceb3cda WatchSource:0}: Error finding container f63927c28ee0faa981ea6ee3b347af03006284ae5da2f6327d7f90587ceb3cda: Status 404 returned error can't find the container with id f63927c28ee0faa981ea6ee3b347af03006284ae5da2f6327d7f90587ceb3cda
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.397640 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerStarted","Data":"f63927c28ee0faa981ea6ee3b347af03006284ae5da2f6327d7f90587ceb3cda"}
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.402581 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerStarted","Data":"5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b"}
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.406640 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"250ff311-5acb-4def-9d7c-ead2d48f29bd","Type":"ContainerStarted","Data":"bde8862767ab893e80feb21c221640cd46b32c02a938b26caddb8078fe3c715d"}
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.406757 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.464089 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.464063349 podStartE2EDuration="3.464063349s" podCreationTimestamp="2026-03-21 09:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:51.452531823 +0000 UTC m=+1295.047730092" watchObservedRunningTime="2026-03-21 09:19:51.464063349 +0000 UTC m=+1295.059261628"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.751634 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ab16ee-8108-498f-8450-bb82bf6ce347" path="/var/lib/kubelet/pods/02ab16ee-8108-498f-8450-bb82bf6ce347/volumes"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.752839 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70233835-5b9c-4b42-a3e1-07ccbccfeaf3" path="/var/lib/kubelet/pods/70233835-5b9c-4b42-a3e1-07ccbccfeaf3/volumes"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.753962 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.817466 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.831149 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98588b9bc-lswzd"]
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.831438 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerName="dnsmasq-dns" containerID="cri-o://58c49e2d8620ae87f0dc2cdec7d3fe78c467c74bf39c7187b86fb9c6d5d68fe9" gracePeriod=10
Mar 21 09:19:51 crc kubenswrapper[4932]: I0321 09:19:51.895854 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.419096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerStarted","Data":"756b423f6e2293ab4bed4703b8cf8e96f0700bffafc752fdd453ebc3e7a22e13"}
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.419626 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerStarted","Data":"a16fb5a511c142c5c67f984bbdb9c6bd0ba65e591d24fcecbf0a1f4d1b69aee6"}
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.429948 4932 generic.go:334] "Generic (PLEG): container finished" podID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerID="58c49e2d8620ae87f0dc2cdec7d3fe78c467c74bf39c7187b86fb9c6d5d68fe9" exitCode=0
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.431188 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" event={"ID":"4181731c-7e65-491c-8bf2-7e7042ad14e3","Type":"ContainerDied","Data":"58c49e2d8620ae87f0dc2cdec7d3fe78c467c74bf39c7187b86fb9c6d5d68fe9"}
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.431241 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" event={"ID":"4181731c-7e65-491c-8bf2-7e7042ad14e3","Type":"ContainerDied","Data":"35518d0281384e4ce70548c033aa81e4801fba9315cd74f83ce192b6baa501e0"}
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.431255 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35518d0281384e4ce70548c033aa81e4801fba9315cd74f83ce192b6baa501e0"
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.431441 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="cinder-scheduler" containerID="cri-o://3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c" gracePeriod=30
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.432960 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="probe" containerID="cri-o://41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15" gracePeriod=30
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.475227 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98588b9bc-lswzd"
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.582269 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-sb\") pod \"4181731c-7e65-491c-8bf2-7e7042ad14e3\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") "
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.582330 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-swift-storage-0\") pod \"4181731c-7e65-491c-8bf2-7e7042ad14e3\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") "
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.582383 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-nb\") pod \"4181731c-7e65-491c-8bf2-7e7042ad14e3\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") "
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.582450 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-config\") pod \"4181731c-7e65-491c-8bf2-7e7042ad14e3\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") "
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.582473 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-svc\") pod \"4181731c-7e65-491c-8bf2-7e7042ad14e3\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") "
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.582493 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrnfw\" (UniqueName: \"kubernetes.io/projected/4181731c-7e65-491c-8bf2-7e7042ad14e3-kube-api-access-vrnfw\") pod \"4181731c-7e65-491c-8bf2-7e7042ad14e3\" (UID: \"4181731c-7e65-491c-8bf2-7e7042ad14e3\") "
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.591049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4181731c-7e65-491c-8bf2-7e7042ad14e3-kube-api-access-vrnfw" (OuterVolumeSpecName: "kube-api-access-vrnfw") pod "4181731c-7e65-491c-8bf2-7e7042ad14e3" (UID: "4181731c-7e65-491c-8bf2-7e7042ad14e3"). InnerVolumeSpecName "kube-api-access-vrnfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.649884 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4181731c-7e65-491c-8bf2-7e7042ad14e3" (UID: "4181731c-7e65-491c-8bf2-7e7042ad14e3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.651897 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4181731c-7e65-491c-8bf2-7e7042ad14e3" (UID: "4181731c-7e65-491c-8bf2-7e7042ad14e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.661520 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-config" (OuterVolumeSpecName: "config") pod "4181731c-7e65-491c-8bf2-7e7042ad14e3" (UID: "4181731c-7e65-491c-8bf2-7e7042ad14e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.661757 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4181731c-7e65-491c-8bf2-7e7042ad14e3" (UID: "4181731c-7e65-491c-8bf2-7e7042ad14e3"). InnerVolumeSpecName "dns-swift-storage-0".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.667942 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4181731c-7e65-491c-8bf2-7e7042ad14e3" (UID: "4181731c-7e65-491c-8bf2-7e7042ad14e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.685807 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.685855 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.685866 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.685877 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrnfw\" (UniqueName: \"kubernetes.io/projected/4181731c-7e65-491c-8bf2-7e7042ad14e3-kube-api-access-vrnfw\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.685889 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:52 crc kubenswrapper[4932]: I0321 09:19:52.685898 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4181731c-7e65-491c-8bf2-7e7042ad14e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.440107 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-658f888668-v6842" Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.442161 4932 generic.go:334] "Generic (PLEG): container finished" podID="eddafca3-d459-4525-9f44-5e09410a725e" containerID="41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15" exitCode=0 Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.442250 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eddafca3-d459-4525-9f44-5e09410a725e","Type":"ContainerDied","Data":"41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15"} Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.445835 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98588b9bc-lswzd" Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.449046 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerStarted","Data":"b6642bea573633cc66bab71232635859039f162c89d246e9fa1e165feea2033d"} Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.487099 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98588b9bc-lswzd"] Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.499002 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98588b9bc-lswzd"] Mar 21 09:19:53 crc kubenswrapper[4932]: I0321 09:19:53.719571 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" path="/var/lib/kubelet/pods/4181731c-7e65-491c-8bf2-7e7042ad14e3/volumes" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.170127 4932 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.318225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data-custom\") pod \"eddafca3-d459-4525-9f44-5e09410a725e\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.318307 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcj95\" (UniqueName: \"kubernetes.io/projected/eddafca3-d459-4525-9f44-5e09410a725e-kube-api-access-kcj95\") pod \"eddafca3-d459-4525-9f44-5e09410a725e\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.319434 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-scripts\") pod \"eddafca3-d459-4525-9f44-5e09410a725e\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.319494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddafca3-d459-4525-9f44-5e09410a725e-etc-machine-id\") pod \"eddafca3-d459-4525-9f44-5e09410a725e\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.319572 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data\") pod \"eddafca3-d459-4525-9f44-5e09410a725e\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.319616 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-combined-ca-bundle\") pod \"eddafca3-d459-4525-9f44-5e09410a725e\" (UID: \"eddafca3-d459-4525-9f44-5e09410a725e\") " Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.319842 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eddafca3-d459-4525-9f44-5e09410a725e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eddafca3-d459-4525-9f44-5e09410a725e" (UID: "eddafca3-d459-4525-9f44-5e09410a725e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.320672 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eddafca3-d459-4525-9f44-5e09410a725e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.327590 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-scripts" (OuterVolumeSpecName: "scripts") pod "eddafca3-d459-4525-9f44-5e09410a725e" (UID: "eddafca3-d459-4525-9f44-5e09410a725e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.339126 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eddafca3-d459-4525-9f44-5e09410a725e" (UID: "eddafca3-d459-4525-9f44-5e09410a725e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.339143 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddafca3-d459-4525-9f44-5e09410a725e-kube-api-access-kcj95" (OuterVolumeSpecName: "kube-api-access-kcj95") pod "eddafca3-d459-4525-9f44-5e09410a725e" (UID: "eddafca3-d459-4525-9f44-5e09410a725e"). InnerVolumeSpecName "kube-api-access-kcj95". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.389828 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eddafca3-d459-4525-9f44-5e09410a725e" (UID: "eddafca3-d459-4525-9f44-5e09410a725e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.424387 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.424429 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcj95\" (UniqueName: \"kubernetes.io/projected/eddafca3-d459-4525-9f44-5e09410a725e-kube-api-access-kcj95\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.424445 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.424456 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.464389 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data" (OuterVolumeSpecName: "config-data") pod "eddafca3-d459-4525-9f44-5e09410a725e" (UID: "eddafca3-d459-4525-9f44-5e09410a725e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.467262 4932 generic.go:334] "Generic (PLEG): container finished" podID="eddafca3-d459-4525-9f44-5e09410a725e" containerID="3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c" exitCode=0 Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.467324 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eddafca3-d459-4525-9f44-5e09410a725e","Type":"ContainerDied","Data":"3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c"} Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.467380 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eddafca3-d459-4525-9f44-5e09410a725e","Type":"ContainerDied","Data":"f8d21366849bd2e9da300d67cdd5931230824204cf8c579fa115d7f57985f699"} Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.467395 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.467404 4932 scope.go:117] "RemoveContainer" containerID="41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.519493 4932 scope.go:117] "RemoveContainer" containerID="3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.520060 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.535788 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.538993 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eddafca3-d459-4525-9f44-5e09410a725e-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.558709 4932 scope.go:117] "RemoveContainer" containerID="41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15" Mar 21 09:19:54 crc kubenswrapper[4932]: E0321 09:19:54.560654 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15\": container with ID starting with 41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15 not found: ID does not exist" containerID="41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.560755 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15"} err="failed to get container status \"41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15\": rpc error: code = NotFound desc = 
could not find container \"41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15\": container with ID starting with 41ffc2788a6ec682246f1f93fea9f1c44d1f73af739a3ef17a546a61fed98f15 not found: ID does not exist" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.560794 4932 scope.go:117] "RemoveContainer" containerID="3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c" Mar 21 09:19:54 crc kubenswrapper[4932]: E0321 09:19:54.561205 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c\": container with ID starting with 3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c not found: ID does not exist" containerID="3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.561268 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c"} err="failed to get container status \"3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c\": rpc error: code = NotFound desc = could not find container \"3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c\": container with ID starting with 3d4895d68d74830b9180ea369b229278fd352feab31da7508c701fa78ac19c7c not found: ID does not exist" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.596528 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:54 crc kubenswrapper[4932]: E0321 09:19:54.597669 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerName="dnsmasq-dns" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.597699 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerName="dnsmasq-dns" 
Mar 21 09:19:54 crc kubenswrapper[4932]: E0321 09:19:54.597735 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerName="init" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.597743 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerName="init" Mar 21 09:19:54 crc kubenswrapper[4932]: E0321 09:19:54.597791 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="probe" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.597802 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="probe" Mar 21 09:19:54 crc kubenswrapper[4932]: E0321 09:19:54.597821 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="cinder-scheduler" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.597829 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="cinder-scheduler" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.598341 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4181731c-7e65-491c-8bf2-7e7042ad14e3" containerName="dnsmasq-dns" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.598391 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="cinder-scheduler" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.598404 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="eddafca3-d459-4525-9f44-5e09410a725e" containerName="probe" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.600903 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.603146 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.633142 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.742454 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cae8d2b-2318-4ec3-a291-10530a2532d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.742552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.742579 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.742711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xsb\" (UniqueName: \"kubernetes.io/projected/2cae8d2b-2318-4ec3-a291-10530a2532d5-kube-api-access-g6xsb\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 
09:19:54.742731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.742776 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.844638 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.844693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.846461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xsb\" (UniqueName: \"kubernetes.io/projected/2cae8d2b-2318-4ec3-a291-10530a2532d5-kube-api-access-g6xsb\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.846820 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.846939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.847066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cae8d2b-2318-4ec3-a291-10530a2532d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.848584 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.853419 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cae8d2b-2318-4ec3-a291-10530a2532d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.853833 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " 
pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.856583 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.856979 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cae8d2b-2318-4ec3-a291-10530a2532d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.864964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xsb\" (UniqueName: \"kubernetes.io/projected/2cae8d2b-2318-4ec3-a291-10530a2532d5-kube-api-access-g6xsb\") pod \"cinder-scheduler-0\" (UID: \"2cae8d2b-2318-4ec3-a291-10530a2532d5\") " pod="openstack/cinder-scheduler-0" Mar 21 09:19:54 crc kubenswrapper[4932]: I0321 09:19:54.942514 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 21 09:19:55 crc kubenswrapper[4932]: W0321 09:19:55.439048 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cae8d2b_2318_4ec3_a291_10530a2532d5.slice/crio-b7b20a5380dc4f2291710f58fa4dd8c18aff51a3cffa043aa2a6d466218b8b35 WatchSource:0}: Error finding container b7b20a5380dc4f2291710f58fa4dd8c18aff51a3cffa043aa2a6d466218b8b35: Status 404 returned error can't find the container with id b7b20a5380dc4f2291710f58fa4dd8c18aff51a3cffa043aa2a6d466218b8b35 Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.439621 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.486405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerStarted","Data":"67948b886eb6d8fec876595ae737cc6f68ef120480d2c3f42ee600dd19450f88"} Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.486924 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.489834 4932 generic.go:334] "Generic (PLEG): container finished" podID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" exitCode=1 Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.489864 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerDied","Data":"5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b"} Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.489939 4932 scope.go:117] "RemoveContainer" containerID="827a60ca689a264e4437b5632bfe0ca8e3cd8220f6ec9c9c51a6cfb61c4f942e" Mar 21 09:19:55 crc kubenswrapper[4932]: 
I0321 09:19:55.490755 4932 scope.go:117] "RemoveContainer" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" Mar 21 09:19:55 crc kubenswrapper[4932]: E0321 09:19:55.491087 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7d4c5dc9-4c73-486d-9427-2a5f07da9e89)\"" pod="openstack/watcher-decision-engine-0" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.494137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2cae8d2b-2318-4ec3-a291-10530a2532d5","Type":"ContainerStarted","Data":"b7b20a5380dc4f2291710f58fa4dd8c18aff51a3cffa043aa2a6d466218b8b35"} Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.530249 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.172700896 podStartE2EDuration="5.530223016s" podCreationTimestamp="2026-03-21 09:19:50 +0000 UTC" firstStartedPulling="2026-03-21 09:19:51.304316646 +0000 UTC m=+1294.899514945" lastFinishedPulling="2026-03-21 09:19:54.661838796 +0000 UTC m=+1298.257037065" observedRunningTime="2026-03-21 09:19:55.523591376 +0000 UTC m=+1299.118789645" watchObservedRunningTime="2026-03-21 09:19:55.530223016 +0000 UTC m=+1299.125421285" Mar 21 09:19:55 crc kubenswrapper[4932]: I0321 09:19:55.717634 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddafca3-d459-4525-9f44-5e09410a725e" path="/var/lib/kubelet/pods/eddafca3-d459-4525-9f44-5e09410a725e/volumes" Mar 21 09:19:56 crc kubenswrapper[4932]: I0321 09:19:56.515624 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2cae8d2b-2318-4ec3-a291-10530a2532d5","Type":"ContainerStarted","Data":"adca77895727423eaec08310b611eecd83fd5391129d705f288b193e699cbee9"} Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.362369 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.363696 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.366082 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.366336 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.366665 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-98d4g" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.379670 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.507610 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3e2380f-3fa6-4322-b9e2-befe6a37c754-openstack-config\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.507849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3e2380f-3fa6-4322-b9e2-befe6a37c754-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.508061 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e2380f-3fa6-4322-b9e2-befe6a37c754-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.508126 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7km5\" (UniqueName: \"kubernetes.io/projected/c3e2380f-3fa6-4322-b9e2-befe6a37c754-kube-api-access-c7km5\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.528934 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2cae8d2b-2318-4ec3-a291-10530a2532d5","Type":"ContainerStarted","Data":"ea767c7bcc691d871afc7274531f6aabd863d38d60ddd99d1b431ee3463ac152"} Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.554974 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.554949652 podStartE2EDuration="3.554949652s" podCreationTimestamp="2026-03-21 09:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:19:57.550018207 +0000 UTC m=+1301.145216496" watchObservedRunningTime="2026-03-21 09:19:57.554949652 +0000 UTC m=+1301.150147921" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.610263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3e2380f-3fa6-4322-b9e2-befe6a37c754-openstack-config\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 
09:19:57.610408 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3e2380f-3fa6-4322-b9e2-befe6a37c754-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.610491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e2380f-3fa6-4322-b9e2-befe6a37c754-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.610517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7km5\" (UniqueName: \"kubernetes.io/projected/c3e2380f-3fa6-4322-b9e2-befe6a37c754-kube-api-access-c7km5\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.611565 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3e2380f-3fa6-4322-b9e2-befe6a37c754-openstack-config\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.618008 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e2380f-3fa6-4322-b9e2-befe6a37c754-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.619329 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/c3e2380f-3fa6-4322-b9e2-befe6a37c754-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.630526 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7km5\" (UniqueName: \"kubernetes.io/projected/c3e2380f-3fa6-4322-b9e2-befe6a37c754-kube-api-access-c7km5\") pod \"openstackclient\" (UID: \"c3e2380f-3fa6-4322-b9e2-befe6a37c754\") " pod="openstack/openstackclient" Mar 21 09:19:57 crc kubenswrapper[4932]: I0321 09:19:57.690652 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 21 09:19:58 crc kubenswrapper[4932]: I0321 09:19:58.218243 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 21 09:19:58 crc kubenswrapper[4932]: I0321 09:19:58.546508 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3e2380f-3fa6-4322-b9e2-befe6a37c754","Type":"ContainerStarted","Data":"8f6f42d23286e36a4736401cee5c8315efb28e36dbaf8ca57a6d8fca6aaeda71"} Mar 21 09:19:59 crc kubenswrapper[4932]: I0321 09:19:59.943422 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.138409 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568080-4t8ff"] Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.140073 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.148858 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.149060 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.149171 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.154529 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568080-4t8ff"] Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.262244 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkmn\" (UniqueName: \"kubernetes.io/projected/24f72379-7d9f-4c04-b560-ba0495427abd-kube-api-access-lpkmn\") pod \"auto-csr-approver-29568080-4t8ff\" (UID: \"24f72379-7d9f-4c04-b560-ba0495427abd\") " pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.364602 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpkmn\" (UniqueName: \"kubernetes.io/projected/24f72379-7d9f-4c04-b560-ba0495427abd-kube-api-access-lpkmn\") pod \"auto-csr-approver-29568080-4t8ff\" (UID: \"24f72379-7d9f-4c04-b560-ba0495427abd\") " pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.386542 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpkmn\" (UniqueName: \"kubernetes.io/projected/24f72379-7d9f-4c04-b560-ba0495427abd-kube-api-access-lpkmn\") pod \"auto-csr-approver-29568080-4t8ff\" (UID: \"24f72379-7d9f-4c04-b560-ba0495427abd\") " 
pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:00 crc kubenswrapper[4932]: E0321 09:20:00.399937 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba11bdb_a9d2_414d_b2df_3eaedd97df7e.slice/crio-42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b\": RecentStats: unable to find data in memory cache]" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.471266 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.644008 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.768762 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bc4cc655d-wmdr9" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.862570 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.862939 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:00 crc kubenswrapper[4932]: I0321 09:20:00.863735 4932 scope.go:117] "RemoveContainer" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" Mar 21 09:20:00 crc kubenswrapper[4932]: E0321 09:20:00.864034 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7d4c5dc9-4c73-486d-9427-2a5f07da9e89)\"" pod="openstack/watcher-decision-engine-0" 
podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" Mar 21 09:20:01 crc kubenswrapper[4932]: W0321 09:20:01.187641 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24f72379_7d9f_4c04_b560_ba0495427abd.slice/crio-79792a149cbf42c2d6020683f3881a1d361b33f5f87032d3e4d1f6a9f843da96 WatchSource:0}: Error finding container 79792a149cbf42c2d6020683f3881a1d361b33f5f87032d3e4d1f6a9f843da96: Status 404 returned error can't find the container with id 79792a149cbf42c2d6020683f3881a1d361b33f5f87032d3e4d1f6a9f843da96 Mar 21 09:20:01 crc kubenswrapper[4932]: I0321 09:20:01.194716 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568080-4t8ff"] Mar 21 09:20:01 crc kubenswrapper[4932]: I0321 09:20:01.593752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" event={"ID":"24f72379-7d9f-4c04-b560-ba0495427abd","Type":"ContainerStarted","Data":"79792a149cbf42c2d6020683f3881a1d361b33f5f87032d3e4d1f6a9f843da96"} Mar 21 09:20:01 crc kubenswrapper[4932]: I0321 09:20:01.725114 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.620543 4932 generic.go:334] "Generic (PLEG): container finished" podID="24f72379-7d9f-4c04-b560-ba0495427abd" containerID="53d94f95343e4797016ba4b73087ef254817b72711e916d867684b782edd6634" exitCode=0 Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.621069 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" event={"ID":"24f72379-7d9f-4c04-b560-ba0495427abd","Type":"ContainerDied","Data":"53d94f95343e4797016ba4b73087ef254817b72711e916d867684b782edd6634"} Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.703138 4932 scope.go:117] "RemoveContainer" 
containerID="3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.703199 4932 scope.go:117] "RemoveContainer" containerID="8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.947573 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55b6df9899-kzdvb"] Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.949321 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.952168 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.952444 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.952592 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 21 09:20:03 crc kubenswrapper[4932]: I0321 09:20:03.966476 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55b6df9899-kzdvb"] Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.055762 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13cb95e4-69d8-4acf-9b49-8da6aed86089-log-httpd\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13cb95e4-69d8-4acf-9b49-8da6aed86089-etc-swift\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: 
\"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056076 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-internal-tls-certs\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056137 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dzmg\" (UniqueName: \"kubernetes.io/projected/13cb95e4-69d8-4acf-9b49-8da6aed86089-kube-api-access-6dzmg\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056210 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-config-data\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056574 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-combined-ca-bundle\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-public-tls-certs\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.056800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13cb95e4-69d8-4acf-9b49-8da6aed86089-run-httpd\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158474 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13cb95e4-69d8-4acf-9b49-8da6aed86089-log-httpd\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158558 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13cb95e4-69d8-4acf-9b49-8da6aed86089-etc-swift\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-internal-tls-certs\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158664 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dzmg\" (UniqueName: 
\"kubernetes.io/projected/13cb95e4-69d8-4acf-9b49-8da6aed86089-kube-api-access-6dzmg\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158718 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-config-data\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158741 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-combined-ca-bundle\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-public-tls-certs\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.158904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13cb95e4-69d8-4acf-9b49-8da6aed86089-run-httpd\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.159421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/13cb95e4-69d8-4acf-9b49-8da6aed86089-run-httpd\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.160058 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/13cb95e4-69d8-4acf-9b49-8da6aed86089-log-httpd\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.165638 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-combined-ca-bundle\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.166579 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-public-tls-certs\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.167496 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/13cb95e4-69d8-4acf-9b49-8da6aed86089-etc-swift\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.168843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-internal-tls-certs\") pod 
\"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.174199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13cb95e4-69d8-4acf-9b49-8da6aed86089-config-data\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.180712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dzmg\" (UniqueName: \"kubernetes.io/projected/13cb95e4-69d8-4acf-9b49-8da6aed86089-kube-api-access-6dzmg\") pod \"swift-proxy-55b6df9899-kzdvb\" (UID: \"13cb95e4-69d8-4acf-9b49-8da6aed86089\") " pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.271205 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.454040 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.454333 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-central-agent" containerID="cri-o://a16fb5a511c142c5c67f984bbdb9c6bd0ba65e591d24fcecbf0a1f4d1b69aee6" gracePeriod=30 Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.454475 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-notification-agent" containerID="cri-o://756b423f6e2293ab4bed4703b8cf8e96f0700bffafc752fdd453ebc3e7a22e13" gracePeriod=30 Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.454526 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="proxy-httpd" containerID="cri-o://67948b886eb6d8fec876595ae737cc6f68ef120480d2c3f42ee600dd19450f88" gracePeriod=30 Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.454560 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="sg-core" containerID="cri-o://b6642bea573633cc66bab71232635859039f162c89d246e9fa1e165feea2033d" gracePeriod=30 Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.465083 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": EOF" Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.633616 4932 generic.go:334] 
"Generic (PLEG): container finished" podID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerID="b6642bea573633cc66bab71232635859039f162c89d246e9fa1e165feea2033d" exitCode=2 Mar 21 09:20:04 crc kubenswrapper[4932]: I0321 09:20:04.633697 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerDied","Data":"b6642bea573633cc66bab71232635859039f162c89d246e9fa1e165feea2033d"} Mar 21 09:20:05 crc kubenswrapper[4932]: I0321 09:20:05.104152 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 21 09:20:05 crc kubenswrapper[4932]: I0321 09:20:05.648204 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerID="67948b886eb6d8fec876595ae737cc6f68ef120480d2c3f42ee600dd19450f88" exitCode=0 Mar 21 09:20:05 crc kubenswrapper[4932]: I0321 09:20:05.648477 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerID="a16fb5a511c142c5c67f984bbdb9c6bd0ba65e591d24fcecbf0a1f4d1b69aee6" exitCode=0 Mar 21 09:20:05 crc kubenswrapper[4932]: I0321 09:20:05.648275 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerDied","Data":"67948b886eb6d8fec876595ae737cc6f68ef120480d2c3f42ee600dd19450f88"} Mar 21 09:20:05 crc kubenswrapper[4932]: I0321 09:20:05.648511 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerDied","Data":"a16fb5a511c142c5c67f984bbdb9c6bd0ba65e591d24fcecbf0a1f4d1b69aee6"} Mar 21 09:20:06 crc kubenswrapper[4932]: I0321 09:20:06.659907 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerID="756b423f6e2293ab4bed4703b8cf8e96f0700bffafc752fdd453ebc3e7a22e13" exitCode=0 
Mar 21 09:20:06 crc kubenswrapper[4932]: I0321 09:20:06.659955 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerDied","Data":"756b423f6e2293ab4bed4703b8cf8e96f0700bffafc752fdd453ebc3e7a22e13"} Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.582156 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.608611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpkmn\" (UniqueName: \"kubernetes.io/projected/24f72379-7d9f-4c04-b560-ba0495427abd-kube-api-access-lpkmn\") pod \"24f72379-7d9f-4c04-b560-ba0495427abd\" (UID: \"24f72379-7d9f-4c04-b560-ba0495427abd\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.621391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f72379-7d9f-4c04-b560-ba0495427abd-kube-api-access-lpkmn" (OuterVolumeSpecName: "kube-api-access-lpkmn") pod "24f72379-7d9f-4c04-b560-ba0495427abd" (UID: "24f72379-7d9f-4c04-b560-ba0495427abd"). InnerVolumeSpecName "kube-api-access-lpkmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.725112 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpkmn\" (UniqueName: \"kubernetes.io/projected/24f72379-7d9f-4c04-b560-ba0495427abd-kube-api-access-lpkmn\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.733058 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" event={"ID":"24f72379-7d9f-4c04-b560-ba0495427abd","Type":"ContainerDied","Data":"79792a149cbf42c2d6020683f3881a1d361b33f5f87032d3e4d1f6a9f843da96"} Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.733124 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79792a149cbf42c2d6020683f3881a1d361b33f5f87032d3e4d1f6a9f843da96" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.733229 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568080-4t8ff" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.801767 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.827774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-run-httpd\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.827934 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-config-data\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.828041 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-log-httpd\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.828172 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjg6l\" (UniqueName: \"kubernetes.io/projected/f4f6e1b0-2895-4f97-848f-80d39a53745f-kube-api-access-vjg6l\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.828208 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-combined-ca-bundle\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.828274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-sg-core-conf-yaml\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.828305 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-scripts\") pod \"f4f6e1b0-2895-4f97-848f-80d39a53745f\" (UID: \"f4f6e1b0-2895-4f97-848f-80d39a53745f\") " Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.834275 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.834551 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.837520 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-scripts" (OuterVolumeSpecName: "scripts") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.844379 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f6e1b0-2895-4f97-848f-80d39a53745f-kube-api-access-vjg6l" (OuterVolumeSpecName: "kube-api-access-vjg6l") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "kube-api-access-vjg6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.864112 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: E0321 09:20:10.874125 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba11bdb_a9d2_414d_b2df_3eaedd97df7e.slice/crio-42ed2f0a53a5b90d45e1f2e4ce56888e345eb42d22c4978b701f9eaf9cdfc37b\": RecentStats: unable to find data in memory cache]" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.931068 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjg6l\" (UniqueName: \"kubernetes.io/projected/f4f6e1b0-2895-4f97-848f-80d39a53745f-kube-api-access-vjg6l\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.931370 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.931380 4932 reconciler_common.go:293] 
"Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.931389 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.931398 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4f6e1b0-2895-4f97-848f-80d39a53745f-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.955644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-config-data" (OuterVolumeSpecName: "config-data") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:10 crc kubenswrapper[4932]: I0321 09:20:10.963806 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4f6e1b0-2895-4f97-848f-80d39a53745f" (UID: "f4f6e1b0-2895-4f97-848f-80d39a53745f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.033055 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.033091 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f6e1b0-2895-4f97-848f-80d39a53745f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.164176 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55b6df9899-kzdvb"] Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.677296 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568074-9rw7k"] Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.690208 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568074-9rw7k"] Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.723527 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca09d16-95ed-48ef-b108-ed8a0e8c6477" path="/var/lib/kubelet/pods/7ca09d16-95ed-48ef-b108-ed8a0e8c6477/volumes" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.750307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081"} Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.771521 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4f6e1b0-2895-4f97-848f-80d39a53745f","Type":"ContainerDied","Data":"f63927c28ee0faa981ea6ee3b347af03006284ae5da2f6327d7f90587ceb3cda"} Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 
09:20:11.771605 4932 scope.go:117] "RemoveContainer" containerID="67948b886eb6d8fec876595ae737cc6f68ef120480d2c3f42ee600dd19450f88" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.771857 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.787100 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55b6df9899-kzdvb" event={"ID":"13cb95e4-69d8-4acf-9b49-8da6aed86089","Type":"ContainerStarted","Data":"d27f6825e79e6fc2db6bbb656a0e35c7ab64430ac49086d6071c59829aee1036"} Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.787144 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55b6df9899-kzdvb" event={"ID":"13cb95e4-69d8-4acf-9b49-8da6aed86089","Type":"ContainerStarted","Data":"a9c71a4535db61f0b8bd641161ebb53fbeb618ec3975387f207c4dfd449fe2df"} Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.803876 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3e2380f-3fa6-4322-b9e2-befe6a37c754","Type":"ContainerStarted","Data":"4a888deaaaf3d914a51498f2f64280a8a7daec120068e5154c8e79417ec56b2a"} Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.812330 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034"} Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.816572 4932 scope.go:117] "RemoveContainer" containerID="b6642bea573633cc66bab71232635859039f162c89d246e9fa1e165feea2033d" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.829841 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.859978 4932 scope.go:117] "RemoveContainer" 
containerID="756b423f6e2293ab4bed4703b8cf8e96f0700bffafc752fdd453ebc3e7a22e13" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.861206 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.881680 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:11 crc kubenswrapper[4932]: E0321 09:20:11.882161 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-notification-agent" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882182 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-notification-agent" Mar 21 09:20:11 crc kubenswrapper[4932]: E0321 09:20:11.882192 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f72379-7d9f-4c04-b560-ba0495427abd" containerName="oc" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882199 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f72379-7d9f-4c04-b560-ba0495427abd" containerName="oc" Mar 21 09:20:11 crc kubenswrapper[4932]: E0321 09:20:11.882221 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="proxy-httpd" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882228 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="proxy-httpd" Mar 21 09:20:11 crc kubenswrapper[4932]: E0321 09:20:11.882238 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="sg-core" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882243 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="sg-core" Mar 21 09:20:11 crc kubenswrapper[4932]: E0321 
09:20:11.882262 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-central-agent" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882268 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-central-agent" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882460 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="sg-core" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882475 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-notification-agent" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882492 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f72379-7d9f-4c04-b560-ba0495427abd" containerName="oc" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882527 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="proxy-httpd" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.882538 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" containerName="ceilometer-central-agent" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.885210 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.896108 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.700417771 podStartE2EDuration="14.896078059s" podCreationTimestamp="2026-03-21 09:19:57 +0000 UTC" firstStartedPulling="2026-03-21 09:19:58.211204446 +0000 UTC m=+1301.806402705" lastFinishedPulling="2026-03-21 09:20:10.406864724 +0000 UTC m=+1314.002062993" observedRunningTime="2026-03-21 09:20:11.88631946 +0000 UTC m=+1315.481517729" watchObservedRunningTime="2026-03-21 09:20:11.896078059 +0000 UTC m=+1315.491276358" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.898055 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.898771 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956630 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956758 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5thhm\" (UniqueName: \"kubernetes.io/projected/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-kube-api-access-5thhm\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956793 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956814 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956844 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-scripts\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956867 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-config-data\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.956893 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.998117 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:11 crc kubenswrapper[4932]: I0321 09:20:11.998550 4932 scope.go:117] "RemoveContainer" containerID="a16fb5a511c142c5c67f984bbdb9c6bd0ba65e591d24fcecbf0a1f4d1b69aee6" Mar 21 09:20:12 crc 
kubenswrapper[4932]: I0321 09:20:12.060003 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5thhm\" (UniqueName: \"kubernetes.io/projected/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-kube-api-access-5thhm\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.060515 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.060705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.060912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-scripts\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.061065 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-config-data\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.061266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.061569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.065143 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.065624 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.070584 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.076923 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-config-data\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.085033 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-scripts\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.096448 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.101784 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5thhm\" (UniqueName: \"kubernetes.io/projected/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-kube-api-access-5thhm\") pod \"ceilometer-0\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.214148 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.746793 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:12 crc kubenswrapper[4932]: W0321 09:20:12.751313 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa1e7fd4_2ff2_431b_90c8_47bd257bd74a.slice/crio-2f8875696905fa718af8857f59b0d86f011572f1713db15d49cd84e8fab3e26d WatchSource:0}: Error finding container 2f8875696905fa718af8857f59b0d86f011572f1713db15d49cd84e8fab3e26d: Status 404 returned error can't find the container with id 2f8875696905fa718af8857f59b0d86f011572f1713db15d49cd84e8fab3e26d Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.821330 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerStarted","Data":"2f8875696905fa718af8857f59b0d86f011572f1713db15d49cd84e8fab3e26d"} Mar 21 09:20:12 crc kubenswrapper[4932]: I0321 09:20:12.828161 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55b6df9899-kzdvb" event={"ID":"13cb95e4-69d8-4acf-9b49-8da6aed86089","Type":"ContainerStarted","Data":"f5d8f554bf23e7672941d8b58826167ad10554cdcc3ef64314b4eb63eb66a6dd"} Mar 21 09:20:13 crc kubenswrapper[4932]: I0321 09:20:13.346881 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55b6df9899-kzdvb" podStartSLOduration=10.346858261 podStartE2EDuration="10.346858261s" podCreationTimestamp="2026-03-21 09:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:12.858098983 +0000 UTC m=+1316.453297252" watchObservedRunningTime="2026-03-21 09:20:13.346858261 +0000 UTC m=+1316.942056530" Mar 21 09:20:13 crc kubenswrapper[4932]: I0321 09:20:13.375909 4932 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:13 crc kubenswrapper[4932]: I0321 09:20:13.715097 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f6e1b0-2895-4f97-848f-80d39a53745f" path="/var/lib/kubelet/pods/f4f6e1b0-2895-4f97-848f-80d39a53745f/volumes" Mar 21 09:20:13 crc kubenswrapper[4932]: I0321 09:20:13.836532 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:13 crc kubenswrapper[4932]: I0321 09:20:13.837525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:14 crc kubenswrapper[4932]: I0321 09:20:14.849563 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerStarted","Data":"e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e"} Mar 21 09:20:14 crc kubenswrapper[4932]: I0321 09:20:14.849907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerStarted","Data":"4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f"} Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.497964 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75cc58bddf-lsnvj" Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.612615 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c449d5454-gtqfd"] Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.612858 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c449d5454-gtqfd" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-api" containerID="cri-o://469b9b376b2ac8f658d73cfb4052b30fc41ddcfdd620a59f423d1d1a3ee68acb" gracePeriod=30 Mar 21 09:20:15 crc 
kubenswrapper[4932]: I0321 09:20:15.613303 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c449d5454-gtqfd" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-httpd" containerID="cri-o://46e1512eedc05e0dd2a4976c9a3689a30cf02368d23f5a4c0456d083c2aec769" gracePeriod=30 Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.703075 4932 scope.go:117] "RemoveContainer" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.883606 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerStarted","Data":"fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392"} Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.885949 4932 generic.go:334] "Generic (PLEG): container finished" podID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerID="46e1512eedc05e0dd2a4976c9a3689a30cf02368d23f5a4c0456d083c2aec769" exitCode=0 Mar 21 09:20:15 crc kubenswrapper[4932]: I0321 09:20:15.886916 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c449d5454-gtqfd" event={"ID":"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0","Type":"ContainerDied","Data":"46e1512eedc05e0dd2a4976c9a3689a30cf02368d23f5a4c0456d083c2aec769"} Mar 21 09:20:16 crc kubenswrapper[4932]: I0321 09:20:16.898568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerStarted","Data":"f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f"} Mar 21 09:20:16 crc kubenswrapper[4932]: I0321 09:20:16.983068 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-86b8b664df-8nqhr" podUID="76427553-0e6c-4a84-820e-34fcfe6732a4" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.175:9696/\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.740814 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.741167 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.914919 4932 generic.go:334] "Generic (PLEG): container finished" podID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerID="469b9b376b2ac8f658d73cfb4052b30fc41ddcfdd620a59f423d1d1a3ee68acb" exitCode=0 Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.914989 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c449d5454-gtqfd" event={"ID":"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0","Type":"ContainerDied","Data":"469b9b376b2ac8f658d73cfb4052b30fc41ddcfdd620a59f423d1d1a3ee68acb"} Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.920447 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerStarted","Data":"0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d"} Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.920611 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-central-agent" containerID="cri-o://4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f" gracePeriod=30 Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.920691 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.921074 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="proxy-httpd" containerID="cri-o://0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d" gracePeriod=30 Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.921120 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="sg-core" containerID="cri-o://fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392" gracePeriod=30 Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.921156 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-notification-agent" containerID="cri-o://e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e" gracePeriod=30 Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.943838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.593830375 podStartE2EDuration="6.943814967s" podCreationTimestamp="2026-03-21 09:20:11 +0000 UTC" firstStartedPulling="2026-03-21 09:20:12.753963846 +0000 UTC m=+1316.349162115" lastFinishedPulling="2026-03-21 09:20:17.103948428 +0000 UTC m=+1320.699146707" observedRunningTime="2026-03-21 09:20:17.94105437 +0000 UTC m=+1321.536252639" watchObservedRunningTime="2026-03-21 09:20:17.943814967 +0000 UTC m=+1321.539013236" Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.948816 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:20:17 crc kubenswrapper[4932]: I0321 09:20:17.949868 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.288895 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.400083 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-ovndb-tls-certs\") pod \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.400444 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twqgh\" (UniqueName: \"kubernetes.io/projected/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-kube-api-access-twqgh\") pod \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.400698 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-config\") pod \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.400850 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-combined-ca-bundle\") pod \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.401055 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-httpd-config\") pod \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\" (UID: \"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0\") " Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.414181 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" (UID: "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.423576 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-kube-api-access-twqgh" (OuterVolumeSpecName: "kube-api-access-twqgh") pod "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" (UID: "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0"). InnerVolumeSpecName "kube-api-access-twqgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.511943 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.511982 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twqgh\" (UniqueName: \"kubernetes.io/projected/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-kube-api-access-twqgh\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.566537 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-config" (OuterVolumeSpecName: "config") pod "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" (UID: "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.615535 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" (UID: "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.615989 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.616011 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.624943 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" (UID: "befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.717602 4932 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.938161 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c449d5454-gtqfd" event={"ID":"befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0","Type":"ContainerDied","Data":"88e7a98395c36279da40b56c99d75035718a74445c083fb009d101cb26c6185e"} Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.938218 4932 scope.go:117] "RemoveContainer" containerID="46e1512eedc05e0dd2a4976c9a3689a30cf02368d23f5a4c0456d083c2aec769" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.938376 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c449d5454-gtqfd" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.948693 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-95ptw"] Mar 21 09:20:18 crc kubenswrapper[4932]: E0321 09:20:18.949252 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-httpd" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.949269 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-httpd" Mar 21 09:20:18 crc kubenswrapper[4932]: E0321 09:20:18.949317 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-api" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.949325 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-api" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.949568 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-httpd" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.949602 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" containerName="neutron-api" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.950589 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.961217 4932 generic.go:334] "Generic (PLEG): container finished" podID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerID="0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d" exitCode=0 Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.961245 4932 generic.go:334] "Generic (PLEG): container finished" podID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerID="fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392" exitCode=2 Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.961254 4932 generic.go:334] "Generic (PLEG): container finished" podID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerID="e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e" exitCode=0 Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.961814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerDied","Data":"0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d"} Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.961961 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerDied","Data":"fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392"} Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.962043 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerDied","Data":"e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e"} Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.962164 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-95ptw"] Mar 21 09:20:18 crc kubenswrapper[4932]: I0321 09:20:18.998635 4932 scope.go:117] "RemoveContainer" containerID="469b9b376b2ac8f658d73cfb4052b30fc41ddcfdd620a59f423d1d1a3ee68acb" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.001403 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c449d5454-gtqfd"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.024379 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c449d5454-gtqfd"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.026414 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619be3a-aba3-49f0-83e1-6c89fb5fca43-operator-scripts\") pod \"nova-api-db-create-95ptw\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.026694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2rc\" (UniqueName: \"kubernetes.io/projected/c619be3a-aba3-49f0-83e1-6c89fb5fca43-kube-api-access-6p2rc\") pod \"nova-api-db-create-95ptw\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.111509 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-czdbl"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.113255 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.129771 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q9j9\" (UniqueName: \"kubernetes.io/projected/3949704b-d11e-41e5-aa84-7df80e97aa8d-kube-api-access-6q9j9\") pod \"nova-cell0-db-create-czdbl\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.129818 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949704b-d11e-41e5-aa84-7df80e97aa8d-operator-scripts\") pod \"nova-cell0-db-create-czdbl\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.129949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619be3a-aba3-49f0-83e1-6c89fb5fca43-operator-scripts\") pod \"nova-api-db-create-95ptw\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.130047 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p2rc\" (UniqueName: \"kubernetes.io/projected/c619be3a-aba3-49f0-83e1-6c89fb5fca43-kube-api-access-6p2rc\") pod \"nova-api-db-create-95ptw\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.131235 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619be3a-aba3-49f0-83e1-6c89fb5fca43-operator-scripts\") pod \"nova-api-db-create-95ptw\" (UID: 
\"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.146752 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-czdbl"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.163828 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p2rc\" (UniqueName: \"kubernetes.io/projected/c619be3a-aba3-49f0-83e1-6c89fb5fca43-kube-api-access-6p2rc\") pod \"nova-api-db-create-95ptw\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.231093 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q9j9\" (UniqueName: \"kubernetes.io/projected/3949704b-d11e-41e5-aa84-7df80e97aa8d-kube-api-access-6q9j9\") pod \"nova-cell0-db-create-czdbl\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.231154 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949704b-d11e-41e5-aa84-7df80e97aa8d-operator-scripts\") pod \"nova-cell0-db-create-czdbl\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.232126 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949704b-d11e-41e5-aa84-7df80e97aa8d-operator-scripts\") pod \"nova-cell0-db-create-czdbl\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.259951 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q9j9\" (UniqueName: 
\"kubernetes.io/projected/3949704b-d11e-41e5-aa84-7df80e97aa8d-kube-api-access-6q9j9\") pod \"nova-cell0-db-create-czdbl\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.273800 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-f6c8s"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.275504 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.305840 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.308182 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e3f5-account-create-update-hnwjt"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.310580 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.316664 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.318901 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.319711 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55b6df9899-kzdvb" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.327487 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f6c8s"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.332964 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9553f1cd-76cc-4b81-96e4-b5144f08a050-operator-scripts\") pod \"nova-api-e3f5-account-create-update-hnwjt\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.333056 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s5b8\" (UniqueName: \"kubernetes.io/projected/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-kube-api-access-8s5b8\") pod \"nova-cell1-db-create-f6c8s\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.333121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-operator-scripts\") pod \"nova-cell1-db-create-f6c8s\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " 
pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.333197 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27zlg\" (UniqueName: \"kubernetes.io/projected/9553f1cd-76cc-4b81-96e4-b5144f08a050-kube-api-access-27zlg\") pod \"nova-api-e3f5-account-create-update-hnwjt\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.345772 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3f5-account-create-update-hnwjt"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.383305 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fa88-account-create-update-xmhz8"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.384674 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.388861 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.405858 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fa88-account-create-update-xmhz8"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.415572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.437712 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-operator-scripts\") pod \"nova-cell0-fa88-account-create-update-xmhz8\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.437841 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27zlg\" (UniqueName: \"kubernetes.io/projected/9553f1cd-76cc-4b81-96e4-b5144f08a050-kube-api-access-27zlg\") pod \"nova-api-e3f5-account-create-update-hnwjt\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.437904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9553f1cd-76cc-4b81-96e4-b5144f08a050-operator-scripts\") pod \"nova-api-e3f5-account-create-update-hnwjt\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.438021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s5b8\" (UniqueName: \"kubernetes.io/projected/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-kube-api-access-8s5b8\") pod \"nova-cell1-db-create-f6c8s\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.438149 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-operator-scripts\") pod \"nova-cell1-db-create-f6c8s\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.438184 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbj6\" (UniqueName: \"kubernetes.io/projected/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-kube-api-access-hxbj6\") pod \"nova-cell0-fa88-account-create-update-xmhz8\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.439149 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9553f1cd-76cc-4b81-96e4-b5144f08a050-operator-scripts\") pod \"nova-api-e3f5-account-create-update-hnwjt\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.440268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-operator-scripts\") pod \"nova-cell1-db-create-f6c8s\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.474335 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27zlg\" (UniqueName: \"kubernetes.io/projected/9553f1cd-76cc-4b81-96e4-b5144f08a050-kube-api-access-27zlg\") pod \"nova-api-e3f5-account-create-update-hnwjt\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.476038 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8s5b8\" (UniqueName: \"kubernetes.io/projected/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-kube-api-access-8s5b8\") pod \"nova-cell1-db-create-f6c8s\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.509166 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.532827 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.540305 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxbj6\" (UniqueName: \"kubernetes.io/projected/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-kube-api-access-hxbj6\") pod \"nova-cell0-fa88-account-create-update-xmhz8\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.540376 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-operator-scripts\") pod \"nova-cell0-fa88-account-create-update-xmhz8\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.541601 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-operator-scripts\") pod \"nova-cell0-fa88-account-create-update-xmhz8\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.570048 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbj6\" (UniqueName: \"kubernetes.io/projected/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-kube-api-access-hxbj6\") pod \"nova-cell0-fa88-account-create-update-xmhz8\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.576802 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7481-account-create-update-lt9rh"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.578186 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.583021 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.592921 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.607875 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7481-account-create-update-lt9rh"] Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.644640 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-operator-scripts\") pod \"nova-cell1-7481-account-create-update-lt9rh\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.644838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z296n\" (UniqueName: \"kubernetes.io/projected/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-kube-api-access-z296n\") pod \"nova-cell1-7481-account-create-update-lt9rh\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.747815 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z296n\" (UniqueName: \"kubernetes.io/projected/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-kube-api-access-z296n\") pod \"nova-cell1-7481-account-create-update-lt9rh\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.748069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-operator-scripts\") pod \"nova-cell1-7481-account-create-update-lt9rh\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " 
pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.749031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-operator-scripts\") pod \"nova-cell1-7481-account-create-update-lt9rh\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.766332 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0" path="/var/lib/kubelet/pods/befdde98-8e3c-4fa1-b4bc-8c5b4896e1e0/volumes" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.791090 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z296n\" (UniqueName: \"kubernetes.io/projected/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-kube-api-access-z296n\") pod \"nova-cell1-7481-account-create-update-lt9rh\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.942853 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:19 crc kubenswrapper[4932]: I0321 09:20:19.985608 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-95ptw"] Mar 21 09:20:20 crc kubenswrapper[4932]: W0321 09:20:19.998818 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc619be3a_aba3_49f0_83e1_6c89fb5fca43.slice/crio-63f728e36097f536b8069dba8ac1c1396f2e2ba744419b848aecab49ae7fa7e1 WatchSource:0}: Error finding container 63f728e36097f536b8069dba8ac1c1396f2e2ba744419b848aecab49ae7fa7e1: Status 404 returned error can't find the container with id 63f728e36097f536b8069dba8ac1c1396f2e2ba744419b848aecab49ae7fa7e1 Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.031562 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3f5-account-create-update-hnwjt"] Mar 21 09:20:20 crc kubenswrapper[4932]: W0321 09:20:20.087950 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9553f1cd_76cc_4b81_96e4_b5144f08a050.slice/crio-7c45cc5e1fb4e54df22505ed596dd6e4a0bf1ff2dcbeb81ed145595920c1214d WatchSource:0}: Error finding container 7c45cc5e1fb4e54df22505ed596dd6e4a0bf1ff2dcbeb81ed145595920c1214d: Status 404 returned error can't find the container with id 7c45cc5e1fb4e54df22505ed596dd6e4a0bf1ff2dcbeb81ed145595920c1214d Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.313425 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f6c8s"] Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.328426 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fa88-account-create-update-xmhz8"] Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.362551 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-czdbl"] Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.607464 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7481-account-create-update-lt9rh"] Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.859415 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.859792 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:20 crc kubenswrapper[4932]: I0321 09:20:20.901517 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.054605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" event={"ID":"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa","Type":"ContainerStarted","Data":"38cd268c9d562341ca98dbddd5d07b09ce41b689f069f3090304122945694548"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.107757 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" event={"ID":"9553f1cd-76cc-4b81-96e4-b5144f08a050","Type":"ContainerStarted","Data":"0303fbd4dbfdd4cd49b4c476697682d2de71354978522e8b92c9b504556c76af"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.107812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" event={"ID":"9553f1cd-76cc-4b81-96e4-b5144f08a050","Type":"ContainerStarted","Data":"7c45cc5e1fb4e54df22505ed596dd6e4a0bf1ff2dcbeb81ed145595920c1214d"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.139756 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-95ptw" 
event={"ID":"c619be3a-aba3-49f0-83e1-6c89fb5fca43","Type":"ContainerStarted","Data":"80103d4519b3e81df4085002d0ada9b777104acb945efe8eb58b2ca3d0a08302"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.139806 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-95ptw" event={"ID":"c619be3a-aba3-49f0-83e1-6c89fb5fca43","Type":"ContainerStarted","Data":"63f728e36097f536b8069dba8ac1c1396f2e2ba744419b848aecab49ae7fa7e1"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.155556 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" event={"ID":"7ef3c2b4-8d78-4753-8a9c-b3fca108d873","Type":"ContainerStarted","Data":"1ef68aa312ce21469698dc3310b5b66a48219c39f21d31508c20de6177ea4b36"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.155606 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" event={"ID":"7ef3c2b4-8d78-4753-8a9c-b3fca108d873","Type":"ContainerStarted","Data":"f71f0a1a1a8620f8c9e8ae739879c068bde274302ec89dd6426a43e6ecc9ec73"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.171808 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" podStartSLOduration=2.1717855520000002 podStartE2EDuration="2.171785552s" podCreationTimestamp="2026-03-21 09:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:21.144734218 +0000 UTC m=+1324.739932487" watchObservedRunningTime="2026-03-21 09:20:21.171785552 +0000 UTC m=+1324.766983821" Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.180243 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-czdbl" 
event={"ID":"3949704b-d11e-41e5-aa84-7df80e97aa8d","Type":"ContainerStarted","Data":"7e1f3848cb3279719d67772dfc860f20901f4bd99c471c38dc47d4917c90baef"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.180288 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-czdbl" event={"ID":"3949704b-d11e-41e5-aa84-7df80e97aa8d","Type":"ContainerStarted","Data":"3091b1aa3a52ece8c1b88bb3fbdcefef0465c8352c449f83fef0aa4f3834d94f"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.198962 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-95ptw" podStartSLOduration=3.198941669 podStartE2EDuration="3.198941669s" podCreationTimestamp="2026-03-21 09:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:21.170296655 +0000 UTC m=+1324.765494924" watchObservedRunningTime="2026-03-21 09:20:21.198941669 +0000 UTC m=+1324.794139938" Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.204550 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f6c8s" event={"ID":"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995","Type":"ContainerStarted","Data":"932a1c6d409e9692b2789416fb776e48b4ac43bcef769fb3dcf2fd525303f290"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.204617 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f6c8s" event={"ID":"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995","Type":"ContainerStarted","Data":"06dc1326a168adb264e5e7b3589dc82403a6cbfa8b07e018696d56b16aae5a4f"} Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.249412 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" podStartSLOduration=2.249389581 podStartE2EDuration="2.249389581s" podCreationTimestamp="2026-03-21 09:20:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:21.236073411 +0000 UTC m=+1324.831271700" watchObservedRunningTime="2026-03-21 09:20:21.249389581 +0000 UTC m=+1324.844587850" Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.270777 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-czdbl" podStartSLOduration=2.270748435 podStartE2EDuration="2.270748435s" podCreationTimestamp="2026-03-21 09:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:21.257658042 +0000 UTC m=+1324.852856311" watchObservedRunningTime="2026-03-21 09:20:21.270748435 +0000 UTC m=+1324.865946704" Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.296069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-f6c8s" podStartSLOduration=2.296038943 podStartE2EDuration="2.296038943s" podCreationTimestamp="2026-03-21 09:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:21.274230025 +0000 UTC m=+1324.869428294" watchObservedRunningTime="2026-03-21 09:20:21.296038943 +0000 UTC m=+1324.891237212" Mar 21 09:20:21 crc kubenswrapper[4932]: I0321 09:20:21.330956 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:21 crc kubenswrapper[4932]: E0321 09:20:21.387061 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc619be3a_aba3_49f0_83e1_6c89fb5fca43.slice/crio-conmon-80103d4519b3e81df4085002d0ada9b777104acb945efe8eb58b2ca3d0a08302.scope\": RecentStats: unable to find data in memory 
cache]" Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.218184 4932 generic.go:334] "Generic (PLEG): container finished" podID="7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" containerID="932a1c6d409e9692b2789416fb776e48b4ac43bcef769fb3dcf2fd525303f290" exitCode=0 Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.218280 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f6c8s" event={"ID":"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995","Type":"ContainerDied","Data":"932a1c6d409e9692b2789416fb776e48b4ac43bcef769fb3dcf2fd525303f290"} Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.222215 4932 generic.go:334] "Generic (PLEG): container finished" podID="3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" containerID="c8afc9ae6a57fda89f75a0cfc85b41a583e1009b760eebeb550b5bcbec71267e" exitCode=0 Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.222309 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" event={"ID":"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa","Type":"ContainerDied","Data":"c8afc9ae6a57fda89f75a0cfc85b41a583e1009b760eebeb550b5bcbec71267e"} Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.224975 4932 generic.go:334] "Generic (PLEG): container finished" podID="9553f1cd-76cc-4b81-96e4-b5144f08a050" containerID="0303fbd4dbfdd4cd49b4c476697682d2de71354978522e8b92c9b504556c76af" exitCode=0 Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.225063 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" event={"ID":"9553f1cd-76cc-4b81-96e4-b5144f08a050","Type":"ContainerDied","Data":"0303fbd4dbfdd4cd49b4c476697682d2de71354978522e8b92c9b504556c76af"} Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.226983 4932 generic.go:334] "Generic (PLEG): container finished" podID="c619be3a-aba3-49f0-83e1-6c89fb5fca43" containerID="80103d4519b3e81df4085002d0ada9b777104acb945efe8eb58b2ca3d0a08302" exitCode=0 Mar 
21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.227030 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-95ptw" event={"ID":"c619be3a-aba3-49f0-83e1-6c89fb5fca43","Type":"ContainerDied","Data":"80103d4519b3e81df4085002d0ada9b777104acb945efe8eb58b2ca3d0a08302"} Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.228650 4932 generic.go:334] "Generic (PLEG): container finished" podID="7ef3c2b4-8d78-4753-8a9c-b3fca108d873" containerID="1ef68aa312ce21469698dc3310b5b66a48219c39f21d31508c20de6177ea4b36" exitCode=0 Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.228699 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" event={"ID":"7ef3c2b4-8d78-4753-8a9c-b3fca108d873","Type":"ContainerDied","Data":"1ef68aa312ce21469698dc3310b5b66a48219c39f21d31508c20de6177ea4b36"} Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.230174 4932 generic.go:334] "Generic (PLEG): container finished" podID="3949704b-d11e-41e5-aa84-7df80e97aa8d" containerID="7e1f3848cb3279719d67772dfc860f20901f4bd99c471c38dc47d4917c90baef" exitCode=0 Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.230261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-czdbl" event={"ID":"3949704b-d11e-41e5-aa84-7df80e97aa8d","Type":"ContainerDied","Data":"7e1f3848cb3279719d67772dfc860f20901f4bd99c471c38dc47d4917c90baef"} Mar 21 09:20:22 crc kubenswrapper[4932]: I0321 09:20:22.400303 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.243560 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034" exitCode=1 Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.243639 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034"} Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.243826 4932 scope.go:117] "RemoveContainer" containerID="8d6f9754fbea1a232ea8bbdf7c0ef42bcf6a10b608d6857fe9fe0b45a90abd58" Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.244746 4932 scope.go:117] "RemoveContainer" containerID="0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034" Mar 21 09:20:23 crc kubenswrapper[4932]: E0321 09:20:23.251933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.252575 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081" exitCode=1 Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.252764 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081"} Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.253057 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" containerID="cri-o://f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" gracePeriod=30 Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.254151 4932 scope.go:117] "RemoveContainer" 
containerID="5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081" Mar 21 09:20:23 crc kubenswrapper[4932]: E0321 09:20:23.254449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.554829 4932 scope.go:117] "RemoveContainer" containerID="3234899d958e8366bd98f5bc43117139669b327d810f81e5f97899b5fda1f74c" Mar 21 09:20:23 crc kubenswrapper[4932]: I0321 09:20:23.908880 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.082976 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-operator-scripts\") pod \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.083380 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s5b8\" (UniqueName: \"kubernetes.io/projected/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-kube-api-access-8s5b8\") pod \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\" (UID: \"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.084292 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" (UID: "7528d17d-bb05-4ebe-8e3f-1ad0a8caf995"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.099920 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-kube-api-access-8s5b8" (OuterVolumeSpecName: "kube-api-access-8s5b8") pod "7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" (UID: "7528d17d-bb05-4ebe-8e3f-1ad0a8caf995"). InnerVolumeSpecName "kube-api-access-8s5b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.185470 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.185506 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s5b8\" (UniqueName: \"kubernetes.io/projected/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995-kube-api-access-8s5b8\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.189210 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.195167 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.202455 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.213286 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.235671 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.277181 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" event={"ID":"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa","Type":"ContainerDied","Data":"38cd268c9d562341ca98dbddd5d07b09ce41b689f069f3090304122945694548"} Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.277542 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38cd268c9d562341ca98dbddd5d07b09ce41b689f069f3090304122945694548" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.277624 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7481-account-create-update-lt9rh" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287021 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27zlg\" (UniqueName: \"kubernetes.io/projected/9553f1cd-76cc-4b81-96e4-b5144f08a050-kube-api-access-27zlg\") pod \"9553f1cd-76cc-4b81-96e4-b5144f08a050\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287063 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p2rc\" (UniqueName: \"kubernetes.io/projected/c619be3a-aba3-49f0-83e1-6c89fb5fca43-kube-api-access-6p2rc\") pod \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287103 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949704b-d11e-41e5-aa84-7df80e97aa8d-operator-scripts\") pod \"3949704b-d11e-41e5-aa84-7df80e97aa8d\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287122 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-operator-scripts\") pod \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287162 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-operator-scripts\") pod \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287196 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z296n\" (UniqueName: \"kubernetes.io/projected/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-kube-api-access-z296n\") pod \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\" (UID: \"3e220f5c-e5d4-40b8-9cc0-c650073c3dfa\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287249 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9553f1cd-76cc-4b81-96e4-b5144f08a050-operator-scripts\") pod \"9553f1cd-76cc-4b81-96e4-b5144f08a050\" (UID: \"9553f1cd-76cc-4b81-96e4-b5144f08a050\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287277 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q9j9\" (UniqueName: \"kubernetes.io/projected/3949704b-d11e-41e5-aa84-7df80e97aa8d-kube-api-access-6q9j9\") pod \"3949704b-d11e-41e5-aa84-7df80e97aa8d\" (UID: \"3949704b-d11e-41e5-aa84-7df80e97aa8d\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287316 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxbj6\" (UniqueName: 
\"kubernetes.io/projected/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-kube-api-access-hxbj6\") pod \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\" (UID: \"7ef3c2b4-8d78-4753-8a9c-b3fca108d873\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287336 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619be3a-aba3-49f0-83e1-6c89fb5fca43-operator-scripts\") pod \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\" (UID: \"c619be3a-aba3-49f0-83e1-6c89fb5fca43\") " Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.287935 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c619be3a-aba3-49f0-83e1-6c89fb5fca43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c619be3a-aba3-49f0-83e1-6c89fb5fca43" (UID: "c619be3a-aba3-49f0-83e1-6c89fb5fca43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.288658 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" (UID: "3e220f5c-e5d4-40b8-9cc0-c650073c3dfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.291594 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9553f1cd-76cc-4b81-96e4-b5144f08a050-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9553f1cd-76cc-4b81-96e4-b5144f08a050" (UID: "9553f1cd-76cc-4b81-96e4-b5144f08a050"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.295269 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ef3c2b4-8d78-4753-8a9c-b3fca108d873" (UID: "7ef3c2b4-8d78-4753-8a9c-b3fca108d873"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.295425 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9553f1cd-76cc-4b81-96e4-b5144f08a050-kube-api-access-27zlg" (OuterVolumeSpecName: "kube-api-access-27zlg") pod "9553f1cd-76cc-4b81-96e4-b5144f08a050" (UID: "9553f1cd-76cc-4b81-96e4-b5144f08a050"). InnerVolumeSpecName "kube-api-access-27zlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.296298 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" event={"ID":"9553f1cd-76cc-4b81-96e4-b5144f08a050","Type":"ContainerDied","Data":"7c45cc5e1fb4e54df22505ed596dd6e4a0bf1ff2dcbeb81ed145595920c1214d"} Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.296327 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c45cc5e1fb4e54df22505ed596dd6e4a0bf1ff2dcbeb81ed145595920c1214d" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.296416 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e3f5-account-create-update-hnwjt" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.298128 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3949704b-d11e-41e5-aa84-7df80e97aa8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3949704b-d11e-41e5-aa84-7df80e97aa8d" (UID: "3949704b-d11e-41e5-aa84-7df80e97aa8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.304635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-95ptw" event={"ID":"c619be3a-aba3-49f0-83e1-6c89fb5fca43","Type":"ContainerDied","Data":"63f728e36097f536b8069dba8ac1c1396f2e2ba744419b848aecab49ae7fa7e1"} Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.304680 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f728e36097f536b8069dba8ac1c1396f2e2ba744419b848aecab49ae7fa7e1" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.304792 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-95ptw" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.314294 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-kube-api-access-z296n" (OuterVolumeSpecName: "kube-api-access-z296n") pod "3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" (UID: "3e220f5c-e5d4-40b8-9cc0-c650073c3dfa"). InnerVolumeSpecName "kube-api-access-z296n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.326778 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" event={"ID":"7ef3c2b4-8d78-4753-8a9c-b3fca108d873","Type":"ContainerDied","Data":"f71f0a1a1a8620f8c9e8ae739879c068bde274302ec89dd6426a43e6ecc9ec73"} Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.327125 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f71f0a1a1a8620f8c9e8ae739879c068bde274302ec89dd6426a43e6ecc9ec73" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.327298 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fa88-account-create-update-xmhz8" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.331002 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-czdbl" event={"ID":"3949704b-d11e-41e5-aa84-7df80e97aa8d","Type":"ContainerDied","Data":"3091b1aa3a52ece8c1b88bb3fbdcefef0465c8352c449f83fef0aa4f3834d94f"} Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.331053 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3091b1aa3a52ece8c1b88bb3fbdcefef0465c8352c449f83fef0aa4f3834d94f" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.331155 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-czdbl" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.344666 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3949704b-d11e-41e5-aa84-7df80e97aa8d-kube-api-access-6q9j9" (OuterVolumeSpecName: "kube-api-access-6q9j9") pod "3949704b-d11e-41e5-aa84-7df80e97aa8d" (UID: "3949704b-d11e-41e5-aa84-7df80e97aa8d"). InnerVolumeSpecName "kube-api-access-6q9j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.350126 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f6c8s" event={"ID":"7528d17d-bb05-4ebe-8e3f-1ad0a8caf995","Type":"ContainerDied","Data":"06dc1326a168adb264e5e7b3589dc82403a6cbfa8b07e018696d56b16aae5a4f"} Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.350938 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06dc1326a168adb264e5e7b3589dc82403a6cbfa8b07e018696d56b16aae5a4f" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.351109 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f6c8s" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.356294 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-kube-api-access-hxbj6" (OuterVolumeSpecName: "kube-api-access-hxbj6") pod "7ef3c2b4-8d78-4753-8a9c-b3fca108d873" (UID: "7ef3c2b4-8d78-4753-8a9c-b3fca108d873"). InnerVolumeSpecName "kube-api-access-hxbj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.356927 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c619be3a-aba3-49f0-83e1-6c89fb5fca43-kube-api-access-6p2rc" (OuterVolumeSpecName: "kube-api-access-6p2rc") pod "c619be3a-aba3-49f0-83e1-6c89fb5fca43" (UID: "c619be3a-aba3-49f0-83e1-6c89fb5fca43"). InnerVolumeSpecName "kube-api-access-6p2rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.391489 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxbj6\" (UniqueName: \"kubernetes.io/projected/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-kube-api-access-hxbj6\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.391750 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c619be3a-aba3-49f0-83e1-6c89fb5fca43-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.391816 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27zlg\" (UniqueName: \"kubernetes.io/projected/9553f1cd-76cc-4b81-96e4-b5144f08a050-kube-api-access-27zlg\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.391904 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p2rc\" (UniqueName: \"kubernetes.io/projected/c619be3a-aba3-49f0-83e1-6c89fb5fca43-kube-api-access-6p2rc\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.391969 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3949704b-d11e-41e5-aa84-7df80e97aa8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.392034 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef3c2b4-8d78-4753-8a9c-b3fca108d873-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.392120 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 
09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.392179 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z296n\" (UniqueName: \"kubernetes.io/projected/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa-kube-api-access-z296n\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.392244 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9553f1cd-76cc-4b81-96e4-b5144f08a050-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.392936 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q9j9\" (UniqueName: \"kubernetes.io/projected/3949704b-d11e-41e5-aa84-7df80e97aa8d-kube-api-access-6q9j9\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.422629 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.422895 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-log" containerID="cri-o://c98b72ac22bf47c15785ca8a84c929bbf86dc057b72ddce5fa68e63115a8655e" gracePeriod=30 Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.423061 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-httpd" containerID="cri-o://7d7ad18f2d1042866e797998584f25d42110333772132e59bf304edec5f5d997" gracePeriod=30 Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.866753 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.866989 4932 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/watcher-applier-0" podUID="95c94c46-fec1-499c-8ae2-aab0899f87df" containerName="watcher-applier" containerID="cri-o://38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55" gracePeriod=30 Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.931861 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.932133 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api-log" containerID="cri-o://e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce" gracePeriod=30 Mar 21 09:20:24 crc kubenswrapper[4932]: I0321 09:20:24.932249 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api" containerID="cri-o://36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8" gracePeriod=30 Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.370329 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a277844-3590-4db2-af83-026af5697238" containerID="7d7ad18f2d1042866e797998584f25d42110333772132e59bf304edec5f5d997" exitCode=0 Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.370642 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a277844-3590-4db2-af83-026af5697238" containerID="c98b72ac22bf47c15785ca8a84c929bbf86dc057b72ddce5fa68e63115a8655e" exitCode=143 Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.370432 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a277844-3590-4db2-af83-026af5697238","Type":"ContainerDied","Data":"7d7ad18f2d1042866e797998584f25d42110333772132e59bf304edec5f5d997"} Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.370748 4932 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a277844-3590-4db2-af83-026af5697238","Type":"ContainerDied","Data":"c98b72ac22bf47c15785ca8a84c929bbf86dc057b72ddce5fa68e63115a8655e"} Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.384131 4932 generic.go:334] "Generic (PLEG): container finished" podID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerID="e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce" exitCode=143 Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.384191 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"45de1997-45e0-4e64-b3a3-4d9e9debfb39","Type":"ContainerDied","Data":"e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce"} Mar 21 09:20:25 crc kubenswrapper[4932]: I0321 09:20:25.967696 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.036705 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.045746 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.095794 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.095880 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="95c94c46-fec1-499c-8ae2-aab0899f87df" containerName="watcher-applier" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140090 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqr25\" (UniqueName: \"kubernetes.io/projected/7a277844-3590-4db2-af83-026af5697238-kube-api-access-lqr25\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140164 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-logs\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140190 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-scripts\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140212 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140287 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-httpd-run\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140458 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-config-data\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140527 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-combined-ca-bundle\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.140577 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-public-tls-certs\") pod \"7a277844-3590-4db2-af83-026af5697238\" (UID: \"7a277844-3590-4db2-af83-026af5697238\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.145891 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-logs" (OuterVolumeSpecName: "logs") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.158249 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.174628 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a277844-3590-4db2-af83-026af5697238-kube-api-access-lqr25" (OuterVolumeSpecName: "kube-api-access-lqr25") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "kube-api-access-lqr25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.174646 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.198242 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-scripts" (OuterVolumeSpecName: "scripts") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.212505 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.242762 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqr25\" (UniqueName: \"kubernetes.io/projected/7a277844-3590-4db2-af83-026af5697238-kube-api-access-lqr25\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.242793 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.242803 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.242825 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.242834 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a277844-3590-4db2-af83-026af5697238-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.242842 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.270520 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.288457 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.290928 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-config-data" (OuterVolumeSpecName: "config-data") pod "7a277844-3590-4db2-af83-026af5697238" (UID: "7a277844-3590-4db2-af83-026af5697238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.306655 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347127 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-internal-tls-certs\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347275 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-public-tls-certs\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347360 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-custom-prometheus-ca\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347387 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de1997-45e0-4e64-b3a3-4d9e9debfb39-logs\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347414 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-config-data\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347434 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-combined-ca-bundle\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347457 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rljn\" (UniqueName: \"kubernetes.io/projected/45de1997-45e0-4e64-b3a3-4d9e9debfb39-kube-api-access-4rljn\") pod \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\" (UID: \"45de1997-45e0-4e64-b3a3-4d9e9debfb39\") " Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347735 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347754 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a277844-3590-4db2-af83-026af5697238-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.347765 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.354543 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45de1997-45e0-4e64-b3a3-4d9e9debfb39-logs" (OuterVolumeSpecName: "logs") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.367165 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45de1997-45e0-4e64-b3a3-4d9e9debfb39-kube-api-access-4rljn" (OuterVolumeSpecName: "kube-api-access-4rljn") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "kube-api-access-4rljn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.399750 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.404315 4932 generic.go:334] "Generic (PLEG): container finished" podID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerID="36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8" exitCode=0 Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.404414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"45de1997-45e0-4e64-b3a3-4d9e9debfb39","Type":"ContainerDied","Data":"36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8"} Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.404453 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"45de1997-45e0-4e64-b3a3-4d9e9debfb39","Type":"ContainerDied","Data":"69f6ae80444a25f1501d5ff5f267cd63122441fe285b6b3920fec9a625cecbe0"} Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.404476 4932 scope.go:117] "RemoveContainer" 
containerID="36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.404652 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.412797 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a277844-3590-4db2-af83-026af5697238","Type":"ContainerDied","Data":"66179f9dfe0fd3f21cf39ff2e37a519efdd610f5966e1823462bd5983219de6f"} Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.412888 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.448940 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.448978 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45de1997-45e0-4e64-b3a3-4d9e9debfb39-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.448992 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rljn\" (UniqueName: \"kubernetes.io/projected/45de1997-45e0-4e64-b3a3-4d9e9debfb39-kube-api-access-4rljn\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.465752 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.472937 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.478828 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.512771 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-config-data" (OuterVolumeSpecName: "config-data") pod "45de1997-45e0-4e64-b3a3-4d9e9debfb39" (UID: "45de1997-45e0-4e64-b3a3-4d9e9debfb39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.550992 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.551031 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.551063 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.551073 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45de1997-45e0-4e64-b3a3-4d9e9debfb39-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.633872 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.639453 4932 scope.go:117] "RemoveContainer" containerID="e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.648398 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664399 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.664873 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef3c2b4-8d78-4753-8a9c-b3fca108d873" 
containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664890 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef3c2b4-8d78-4753-8a9c-b3fca108d873" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.664907 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664913 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.664923 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c619be3a-aba3-49f0-83e1-6c89fb5fca43" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664930 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c619be3a-aba3-49f0-83e1-6c89fb5fca43" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.664940 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9553f1cd-76cc-4b81-96e4-b5144f08a050" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664953 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9553f1cd-76cc-4b81-96e4-b5144f08a050" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.664972 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3949704b-d11e-41e5-aa84-7df80e97aa8d" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664978 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3949704b-d11e-41e5-aa84-7df80e97aa8d" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.664988 4932 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api-log" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.664994 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api-log" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.665006 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-httpd" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665012 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-httpd" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.665030 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665038 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.665055 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-log" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665060 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-log" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.665076 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665082 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665279 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665291 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3949704b-d11e-41e5-aa84-7df80e97aa8d" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665298 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9553f1cd-76cc-4b81-96e4-b5144f08a050" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665310 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-httpd" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665322 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c619be3a-aba3-49f0-83e1-6c89fb5fca43" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665340 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" containerName="watcher-api-log" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665366 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" containerName="mariadb-database-create" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665377 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" containerName="mariadb-account-create-update" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665388 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a277844-3590-4db2-af83-026af5697238" containerName="glance-log" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.665395 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef3c2b4-8d78-4753-8a9c-b3fca108d873" containerName="mariadb-account-create-update" Mar 21 
09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.666519 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.666788 4932 scope.go:117] "RemoveContainer" containerID="36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.667488 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8\": container with ID starting with 36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8 not found: ID does not exist" containerID="36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.667532 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8"} err="failed to get container status \"36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8\": rpc error: code = NotFound desc = could not find container \"36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8\": container with ID starting with 36ac294c58c68626e47e980fbe97ff9d9512ef2b564d8fa22a537ebdaaf7d8b8 not found: ID does not exist" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.667563 4932 scope.go:117] "RemoveContainer" containerID="e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce" Mar 21 09:20:26 crc kubenswrapper[4932]: E0321 09:20:26.667919 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce\": container with ID starting with e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce not found: ID does not exist" 
containerID="e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.667953 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce"} err="failed to get container status \"e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce\": rpc error: code = NotFound desc = could not find container \"e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce\": container with ID starting with e6ab262f554a154b683edeec66d4aa7a66cb826c9806f892c1a49fa28fe9dcce not found: ID does not exist" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.667967 4932 scope.go:117] "RemoveContainer" containerID="7d7ad18f2d1042866e797998584f25d42110333772132e59bf304edec5f5d997" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.671073 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.671366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.675294 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.703425 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.703646 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-log" containerID="cri-o://9a2a7548cbaaa4f1d1bee0519cf32e921da16f3b75a3f7500dd9499088c431d1" gracePeriod=30 Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.704179 4932 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-httpd" containerID="cri-o://dd30c5a284e1aa303b8e49a9d345e8a2d4fc7a4567be38e3a9fb6d103bb8e43c" gracePeriod=30 Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.710571 4932 scope.go:117] "RemoveContainer" containerID="c98b72ac22bf47c15785ca8a84c929bbf86dc057b72ddce5fa68e63115a8655e" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.852425 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.866985 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-config-data\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.867153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.867656 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pt6k\" (UniqueName: \"kubernetes.io/projected/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-kube-api-access-6pt6k\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.867742 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-logs\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.867771 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-scripts\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.867866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.867968 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.868025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.874185 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:20:26 crc 
kubenswrapper[4932]: I0321 09:20:26.891608 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.893774 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.905250 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.907509 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.907565 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.907521 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.969731 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.969823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.969936 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-combined-ca-bundle\") pod \"glance-default-external-api-0\" 
(UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.969961 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-config-data\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.970027 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.970129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pt6k\" (UniqueName: \"kubernetes.io/projected/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-kube-api-access-6pt6k\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.970180 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-logs\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.970197 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-scripts\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " 
pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.971956 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.972058 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-logs\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.972321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.992095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 09:20:26.999080 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-config-data\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:26 crc kubenswrapper[4932]: I0321 
09:20:26.999554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-scripts\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.004382 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pt6k\" (UniqueName: \"kubernetes.io/projected/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-kube-api-access-6pt6k\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.007935 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b615cbfa-5bab-4ab9-a5dc-220a68c4331f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.045920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"b615cbfa-5bab-4ab9-a5dc-220a68c4331f\") " pod="openstack/glance-default-external-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072250 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072311 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rvl6d\" (UniqueName: \"kubernetes.io/projected/c75f5feb-4823-4552-a315-ea0e197ba158-kube-api-access-rvl6d\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072341 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-config-data\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072512 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072532 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75f5feb-4823-4552-a315-ea0e197ba158-logs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.072617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.174871 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.176179 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.176757 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvl6d\" (UniqueName: \"kubernetes.io/projected/c75f5feb-4823-4552-a315-ea0e197ba158-kube-api-access-rvl6d\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.176801 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-config-data\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.176860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " 
pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.177208 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.177238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75f5feb-4823-4552-a315-ea0e197ba158-logs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.177773 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75f5feb-4823-4552-a315-ea0e197ba158-logs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.186176 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.186177 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.186207 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-public-tls-certs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.186812 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.191155 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75f5feb-4823-4552-a315-ea0e197ba158-config-data\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.203885 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvl6d\" (UniqueName: \"kubernetes.io/projected/c75f5feb-4823-4552-a315-ea0e197ba158-kube-api-access-rvl6d\") pod \"watcher-api-0\" (UID: \"c75f5feb-4823-4552-a315-ea0e197ba158\") " pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.239096 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.291027 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.434590 4932 generic.go:334] "Generic (PLEG): container finished" podID="bb2449ef-1380-4083-87cd-242b41f821ac" containerID="9a2a7548cbaaa4f1d1bee0519cf32e921da16f3b75a3f7500dd9499088c431d1" exitCode=143 Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.435787 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb2449ef-1380-4083-87cd-242b41f821ac","Type":"ContainerDied","Data":"9a2a7548cbaaa4f1d1bee0519cf32e921da16f3b75a3f7500dd9499088c431d1"} Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.442659 4932 generic.go:334] "Generic (PLEG): container finished" podID="95c94c46-fec1-499c-8ae2-aab0899f87df" containerID="38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55" exitCode=0 Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.442737 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95c94c46-fec1-499c-8ae2-aab0899f87df","Type":"ContainerDied","Data":"38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55"} Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.455290 4932 scope.go:117] "RemoveContainer" containerID="b75f965b3a33331d846a06b527ec365b61bb6ed9d6c987472e9f639d2c7424c5" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.714442 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45de1997-45e0-4e64-b3a3-4d9e9debfb39" path="/var/lib/kubelet/pods/45de1997-45e0-4e64-b3a3-4d9e9debfb39/volumes" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.715120 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a277844-3590-4db2-af83-026af5697238" path="/var/lib/kubelet/pods/7a277844-3590-4db2-af83-026af5697238/volumes" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.740558 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.741427 4932 scope.go:117] "RemoveContainer" containerID="5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081" Mar 21 09:20:27 crc kubenswrapper[4932]: E0321 09:20:27.741655 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.741840 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.871227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.947841 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.948158 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:20:27 crc kubenswrapper[4932]: I0321 09:20:27.948973 4932 scope.go:117] "RemoveContainer" containerID="0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034" Mar 21 09:20:27 crc kubenswrapper[4932]: E0321 09:20:27.949177 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.023401 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.148477 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.200258 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-combined-ca-bundle\") pod \"95c94c46-fec1-499c-8ae2-aab0899f87df\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.200379 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-config-data\") pod \"95c94c46-fec1-499c-8ae2-aab0899f87df\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.200466 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c94c46-fec1-499c-8ae2-aab0899f87df-logs\") pod \"95c94c46-fec1-499c-8ae2-aab0899f87df\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.200639 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkww9\" (UniqueName: \"kubernetes.io/projected/95c94c46-fec1-499c-8ae2-aab0899f87df-kube-api-access-qkww9\") pod \"95c94c46-fec1-499c-8ae2-aab0899f87df\" (UID: \"95c94c46-fec1-499c-8ae2-aab0899f87df\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.209482 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c94c46-fec1-499c-8ae2-aab0899f87df-kube-api-access-qkww9" (OuterVolumeSpecName: "kube-api-access-qkww9") pod "95c94c46-fec1-499c-8ae2-aab0899f87df" 
(UID: "95c94c46-fec1-499c-8ae2-aab0899f87df"). InnerVolumeSpecName "kube-api-access-qkww9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.212880 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c94c46-fec1-499c-8ae2-aab0899f87df-logs" (OuterVolumeSpecName: "logs") pod "95c94c46-fec1-499c-8ae2-aab0899f87df" (UID: "95c94c46-fec1-499c-8ae2-aab0899f87df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.240141 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c94c46-fec1-499c-8ae2-aab0899f87df" (UID: "95c94c46-fec1-499c-8ae2-aab0899f87df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.243003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.260418 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-config-data" (OuterVolumeSpecName: "config-data") pod "95c94c46-fec1-499c-8ae2-aab0899f87df" (UID: "95c94c46-fec1-499c-8ae2-aab0899f87df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.303581 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkww9\" (UniqueName: \"kubernetes.io/projected/95c94c46-fec1-499c-8ae2-aab0899f87df-kube-api-access-qkww9\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.303611 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.303623 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c94c46-fec1-499c-8ae2-aab0899f87df-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.303631 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c94c46-fec1-499c-8ae2-aab0899f87df-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.404934 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5thhm\" (UniqueName: \"kubernetes.io/projected/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-kube-api-access-5thhm\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405029 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-scripts\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-log-httpd\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405168 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-config-data\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-run-httpd\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-combined-ca-bundle\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405537 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405552 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.405649 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-sg-core-conf-yaml\") pod \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\" (UID: \"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a\") " Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.406108 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.406126 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.408675 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-scripts" (OuterVolumeSpecName: "scripts") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.419298 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-kube-api-access-5thhm" (OuterVolumeSpecName: "kube-api-access-5thhm") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). InnerVolumeSpecName "kube-api-access-5thhm". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.467089 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.489757 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c75f5feb-4823-4552-a315-ea0e197ba158","Type":"ContainerStarted","Data":"3849484824fb548aaf43d5c0030bd5dfe64e6b72c839bdf1997110a92d5f4011"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.489837 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c75f5feb-4823-4552-a315-ea0e197ba158","Type":"ContainerStarted","Data":"17d700b0d225cc4cd1899714003a7d5535f2a901593501a3b1b8983172557669"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.493103 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b615cbfa-5bab-4ab9-a5dc-220a68c4331f","Type":"ContainerStarted","Data":"f54380aac26c4c80b73288fe06cc62ee94c65bcc4dc566c3aa45cd2c430cd0f5"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.507656 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.507692 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5thhm\" (UniqueName: \"kubernetes.io/projected/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-kube-api-access-5thhm\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.507702 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.516364 4932 generic.go:334] "Generic (PLEG): container finished" podID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerID="4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f" exitCode=0
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.516463 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerDied","Data":"4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.516543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa1e7fd4-2ff2-431b-90c8-47bd257bd74a","Type":"ContainerDied","Data":"2f8875696905fa718af8857f59b0d86f011572f1713db15d49cd84e8fab3e26d"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.516570 4932 scope.go:117] "RemoveContainer" containerID="0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.516810 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.549566 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.554692 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"95c94c46-fec1-499c-8ae2-aab0899f87df","Type":"ContainerDied","Data":"f98deedde911b44e86d4286291f865d76e54f9a2f6dc27f7561590016087abfd"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.554724 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.574479 4932 generic.go:334] "Generic (PLEG): container finished" podID="bb2449ef-1380-4083-87cd-242b41f821ac" containerID="dd30c5a284e1aa303b8e49a9d345e8a2d4fc7a4567be38e3a9fb6d103bb8e43c" exitCode=0
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.575390 4932 scope.go:117] "RemoveContainer" containerID="5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.575627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.575969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb2449ef-1380-4083-87cd-242b41f821ac","Type":"ContainerDied","Data":"dd30c5a284e1aa303b8e49a9d345e8a2d4fc7a4567be38e3a9fb6d103bb8e43c"}
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.609285 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-config-data" (OuterVolumeSpecName: "config-data") pod "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" (UID: "aa1e7fd4-2ff2-431b-90c8-47bd257bd74a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.609533 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.609559 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.684288 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.691660 4932 scope.go:117] "RemoveContainer" containerID="fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.698632 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.751492 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.752376 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-notification-agent"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.752404 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-notification-agent"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.752435 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c94c46-fec1-499c-8ae2-aab0899f87df" containerName="watcher-applier"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.752442 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c94c46-fec1-499c-8ae2-aab0899f87df" containerName="watcher-applier"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.752467 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="proxy-httpd"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.752479 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="proxy-httpd"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.752497 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-central-agent"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.752504 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-central-agent"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.752531 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="sg-core"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.752539 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="sg-core"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.753052 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c94c46-fec1-499c-8ae2-aab0899f87df" containerName="watcher-applier"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.753084 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="proxy-httpd"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.753096 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-central-agent"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.753126 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="sg-core"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.753161 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" containerName="ceilometer-notification-agent"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.757461 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.760796 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.774527 4932 scope.go:117] "RemoveContainer" containerID="e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.780728 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.810144 4932 scope.go:117] "RemoveContainer" containerID="4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.856790 4932 scope.go:117] "RemoveContainer" containerID="0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.857425 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d\": container with ID starting with 0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d not found: ID does not exist" containerID="0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.857508 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d"} err="failed to get container status \"0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d\": rpc error: code = NotFound desc = could not find container \"0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d\": container with ID starting with 0c0e678ac2a715b4ef49a046b14ab9f5d65f0808630bc55dfd301b8995d5dc8d not found: ID does not exist"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.857545 4932 scope.go:117] "RemoveContainer" containerID="fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.858205 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392\": container with ID starting with fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392 not found: ID does not exist" containerID="fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.858233 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392"} err="failed to get container status \"fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392\": rpc error: code = NotFound desc = could not find container \"fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392\": container with ID starting with fd65bb30743cf6505c6cf9429fda8ec9230f88cb5599ebf0743d59b69d989392 not found: ID does not exist"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.858252 4932 scope.go:117] "RemoveContainer" containerID="e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.868278 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e\": container with ID starting with e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e not found: ID does not exist" containerID="e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.868315 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e"} err="failed to get container status \"e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e\": rpc error: code = NotFound desc = could not find container \"e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e\": container with ID starting with e630344f14e1b9eaccc59d056e88cb1a3d1110aba3e2de3f3a0278186c5f105e not found: ID does not exist"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.868341 4932 scope.go:117] "RemoveContainer" containerID="4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f"
Mar 21 09:20:28 crc kubenswrapper[4932]: E0321 09:20:28.880336 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f\": container with ID starting with 4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f not found: ID does not exist" containerID="4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.880404 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f"} err="failed to get container status \"4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f\": rpc error: code = NotFound desc = could not find container \"4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f\": container with ID starting with 4c7922c185371550a4f29bd43060d5a0144aa6e3de8704eed1012a6ec48b767f not found: ID does not exist"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.880438 4932 scope.go:117] "RemoveContainer" containerID="38ce93587878f1ae6cf2bfccda3c2a41d84bba8bf5991fe74757209456906f55"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.889877 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.910998 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.931209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c6cb87-a70a-4f04-802c-2d33d5449350-config-data\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.931272 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cb87-a70a-4f04-802c-2d33d5449350-logs\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.931302 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrh9b\" (UniqueName: \"kubernetes.io/projected/f1c6cb87-a70a-4f04-802c-2d33d5449350-kube-api-access-xrh9b\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.931394 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c6cb87-a70a-4f04-802c-2d33d5449350-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.935542 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.949836 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.953881 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.954138 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 21 09:20:28 crc kubenswrapper[4932]: I0321 09:20:28.963725 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.032905 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c6cb87-a70a-4f04-802c-2d33d5449350-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.033038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c6cb87-a70a-4f04-802c-2d33d5449350-config-data\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.033084 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cb87-a70a-4f04-802c-2d33d5449350-logs\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.033126 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrh9b\" (UniqueName: \"kubernetes.io/projected/f1c6cb87-a70a-4f04-802c-2d33d5449350-kube-api-access-xrh9b\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.033596 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cb87-a70a-4f04-802c-2d33d5449350-logs\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.037722 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c6cb87-a70a-4f04-802c-2d33d5449350-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.044433 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c6cb87-a70a-4f04-802c-2d33d5449350-config-data\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.057577 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.063020 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrh9b\" (UniqueName: \"kubernetes.io/projected/f1c6cb87-a70a-4f04-802c-2d33d5449350-kube-api-access-xrh9b\") pod \"watcher-applier-0\" (UID: \"f1c6cb87-a70a-4f04-802c-2d33d5449350\") " pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.083708 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.134647 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-log-httpd\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.134961 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-scripts\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.135023 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-run-httpd\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.135067 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.135134 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8667h\" (UniqueName: \"kubernetes.io/projected/276ad914-019c-4ff0-8936-5ec79d750364-kube-api-access-8667h\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.135176 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-config-data\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.135195 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236071 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236169 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vdbv\" (UniqueName: \"kubernetes.io/projected/bb2449ef-1380-4083-87cd-242b41f821ac-kube-api-access-6vdbv\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236251 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-scripts\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-config-data\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-combined-ca-bundle\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236428 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-internal-tls-certs\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236510 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-logs\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-httpd-run\") pod \"bb2449ef-1380-4083-87cd-242b41f821ac\" (UID: \"bb2449ef-1380-4083-87cd-242b41f821ac\") "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-run-httpd\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236874 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236933 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8667h\" (UniqueName: \"kubernetes.io/projected/276ad914-019c-4ff0-8936-5ec79d750364-kube-api-access-8667h\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236960 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-config-data\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.236978 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.237063 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-log-httpd\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.237080 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-scripts\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.243660 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-log-httpd\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.244312 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.245555 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-logs" (OuterVolumeSpecName: "logs") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.248412 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-run-httpd\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.251929 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-config-data\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.256821 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-scripts\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.260988 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.262203 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-scripts" (OuterVolumeSpecName: "scripts") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.262474 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2449ef-1380-4083-87cd-242b41f821ac-kube-api-access-6vdbv" (OuterVolumeSpecName: "kube-api-access-6vdbv") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "kube-api-access-6vdbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.274095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.280325 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8667h\" (UniqueName: \"kubernetes.io/projected/276ad914-019c-4ff0-8936-5ec79d750364-kube-api-access-8667h\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.280681 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") " pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.320507 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.339104 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.339136 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb2449ef-1380-4083-87cd-242b41f821ac-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.339176 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.339187 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vdbv\" (UniqueName: \"kubernetes.io/projected/bb2449ef-1380-4083-87cd-242b41f821ac-kube-api-access-6vdbv\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.339197 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.339205 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.366614 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.379275 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-config-data" (OuterVolumeSpecName: "config-data") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.420543 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb2449ef-1380-4083-87cd-242b41f821ac" (UID: "bb2449ef-1380-4083-87cd-242b41f821ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.424149 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.440561 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.440593 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb2449ef-1380-4083-87cd-242b41f821ac-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.440605 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.626945 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Mar 21 09:20:29 crc kubenswrapper[4932]: W0321 09:20:29.674615 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c6cb87_a70a_4f04_802c_2d33d5449350.slice/crio-6e551feed4017114853ab1af1f1df91a0c90270577f4d692fdc911d02b65a6cf WatchSource:0}: Error finding container 6e551feed4017114853ab1af1f1df91a0c90270577f4d692fdc911d02b65a6cf: Status 404 returned error can't find the container with id 6e551feed4017114853ab1af1f1df91a0c90270577f4d692fdc911d02b65a6cf
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.674993 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c75f5feb-4823-4552-a315-ea0e197ba158","Type":"ContainerStarted","Data":"ae821ea05e1e28efb2efc2a50edcc62bb618cd89df6cec9d868b14950447cf52"}
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.676598 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.722514 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.722486481 podStartE2EDuration="3.722486481s" podCreationTimestamp="2026-03-21 09:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:29.702717937 +0000 UTC m=+1333.297916206" watchObservedRunningTime="2026-03-21 09:20:29.722486481 +0000 UTC m=+1333.317684770"
Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.722799 4932 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.737396 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c94c46-fec1-499c-8ae2-aab0899f87df" path="/var/lib/kubelet/pods/95c94c46-fec1-499c-8ae2-aab0899f87df/volumes" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.739110 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa1e7fd4-2ff2-431b-90c8-47bd257bd74a" path="/var/lib/kubelet/pods/aa1e7fd4-2ff2-431b-90c8-47bd257bd74a/volumes" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.740290 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bb2449ef-1380-4083-87cd-242b41f821ac","Type":"ContainerDied","Data":"896b47630cde264a63ed20d2bf00f00d6bb93ae02fa2acad697d55ece8b4f751"} Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.740342 4932 scope.go:117] "RemoveContainer" containerID="dd30c5a284e1aa303b8e49a9d345e8a2d4fc7a4567be38e3a9fb6d103bb8e43c" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.758574 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b615cbfa-5bab-4ab9-a5dc-220a68c4331f","Type":"ContainerStarted","Data":"c5c13c6a7efdb4fe3561acf2f67c8b879869e489ce67ff5a3568ec84741cac1c"} Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.854763 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.865447 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.878604 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:20:29 crc kubenswrapper[4932]: E0321 09:20:29.880212 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-log" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.880234 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-log" Mar 21 09:20:29 crc kubenswrapper[4932]: E0321 09:20:29.880252 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-httpd" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.880264 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-httpd" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.880560 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-log" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.880578 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-httpd" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.887313 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.899538 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.902568 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.916684 4932 scope.go:117] "RemoveContainer" containerID="9a2a7548cbaaa4f1d1bee0519cf32e921da16f3b75a3f7500dd9499088c431d1" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.920871 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964513 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a90781-196e-452d-9175-b390d33a495c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964599 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964629 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc 
kubenswrapper[4932]: I0321 09:20:29.964675 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964758 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a90781-196e-452d-9175-b390d33a495c-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964802 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65lb\" (UniqueName: \"kubernetes.io/projected/b0a90781-196e-452d-9175-b390d33a495c-kube-api-access-x65lb\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964832 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:29 crc kubenswrapper[4932]: I0321 09:20:29.964871 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " 
pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.012711 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jtrd4"] Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.014542 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.018085 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.018329 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zzb2x" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.018496 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.037128 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jtrd4"] Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066590 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg87x\" (UniqueName: \"kubernetes.io/projected/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-kube-api-access-zg87x\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066642 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0a90781-196e-452d-9175-b390d33a495c-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066701 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x65lb\" (UniqueName: \"kubernetes.io/projected/b0a90781-196e-452d-9175-b390d33a495c-kube-api-access-x65lb\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066802 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-scripts\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066864 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a90781-196e-452d-9175-b390d33a495c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066932 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.066957 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.067005 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.067052 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b0a90781-196e-452d-9175-b390d33a495c-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.068020 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0a90781-196e-452d-9175-b390d33a495c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.068249 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.082003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.085955 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.089673 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.095237 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a90781-196e-452d-9175-b390d33a495c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.104407 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65lb\" (UniqueName: \"kubernetes.io/projected/b0a90781-196e-452d-9175-b390d33a495c-kube-api-access-x65lb\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.168651 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0a90781-196e-452d-9175-b390d33a495c\") " pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.178227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg87x\" (UniqueName: \"kubernetes.io/projected/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-kube-api-access-zg87x\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.178658 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data\") 
pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.178755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-scripts\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.178833 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.188644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.199013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.199140 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-scripts\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: 
\"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.201532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg87x\" (UniqueName: \"kubernetes.io/projected/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-kube-api-access-zg87x\") pod \"nova-cell0-conductor-db-sync-jtrd4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") " pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.320721 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.324145 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.356187 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.788781 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f1c6cb87-a70a-4f04-802c-2d33d5449350","Type":"ContainerStarted","Data":"cc5e63134b3a93e8e8abbfb31c3e411d9d049fea942ade6dd53bcca694427608"} Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.788999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f1c6cb87-a70a-4f04-802c-2d33d5449350","Type":"ContainerStarted","Data":"6e551feed4017114853ab1af1f1df91a0c90270577f4d692fdc911d02b65a6cf"} Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.792075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b615cbfa-5bab-4ab9-a5dc-220a68c4331f","Type":"ContainerStarted","Data":"326231d7ad807be1dba9d67a204843b323944eb87d0ee3d259aee9aedc526b4b"} Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 
09:20:30.794232 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerStarted","Data":"551433dea649237b7b5c036ac46299f3e5efcb805c7bac2c3eeb3540d860887a"} Mar 21 09:20:30 crc kubenswrapper[4932]: I0321 09:20:30.838232 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.838206517 podStartE2EDuration="2.838206517s" podCreationTimestamp="2026-03-21 09:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:30.82215151 +0000 UTC m=+1334.417349779" watchObservedRunningTime="2026-03-21 09:20:30.838206517 +0000 UTC m=+1334.433404786" Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.111060 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.111036548 podStartE2EDuration="5.111036548s" podCreationTimestamp="2026-03-21 09:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:30.858440155 +0000 UTC m=+1334.453638424" watchObservedRunningTime="2026-03-21 09:20:31.111036548 +0000 UTC m=+1334.706234817" Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.121421 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jtrd4"] Mar 21 09:20:31 crc kubenswrapper[4932]: W0321 09:20:31.128714 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c282ef1_ca4b_4eb8_9acb_ed95a92625a4.slice/crio-cb5f9395e676e2c0bdb4ccac375ee6eb363196a496eee219d2ab56d859ba1187 WatchSource:0}: Error finding container cb5f9395e676e2c0bdb4ccac375ee6eb363196a496eee219d2ab56d859ba1187: Status 404 returned error can't 
find the container with id cb5f9395e676e2c0bdb4ccac375ee6eb363196a496eee219d2ab56d859ba1187 Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.152942 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 21 09:20:31 crc kubenswrapper[4932]: W0321 09:20:31.153310 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0a90781_196e_452d_9175_b390d33a495c.slice/crio-b71d9db6b1b0cbb6cde12fc09daea0519853a535d05bb8626df20a13c8dc9b34 WatchSource:0}: Error finding container b71d9db6b1b0cbb6cde12fc09daea0519853a535d05bb8626df20a13c8dc9b34: Status 404 returned error can't find the container with id b71d9db6b1b0cbb6cde12fc09daea0519853a535d05bb8626df20a13c8dc9b34 Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.720708 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" path="/var/lib/kubelet/pods/bb2449ef-1380-4083-87cd-242b41f821ac/volumes" Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.810489 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a90781-196e-452d-9175-b390d33a495c","Type":"ContainerStarted","Data":"b71d9db6b1b0cbb6cde12fc09daea0519853a535d05bb8626df20a13c8dc9b34"} Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.813527 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" event={"ID":"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4","Type":"ContainerStarted","Data":"cb5f9395e676e2c0bdb4ccac375ee6eb363196a496eee219d2ab56d859ba1187"} Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.822459 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.823790 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerStarted","Data":"339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8"}
Mar 21 09:20:31 crc kubenswrapper[4932]: I0321 09:20:31.823849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerStarted","Data":"e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba"}
Mar 21 09:20:32 crc kubenswrapper[4932]: I0321 09:20:32.244639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Mar 21 09:20:32 crc kubenswrapper[4932]: I0321 09:20:32.844141 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a90781-196e-452d-9175-b390d33a495c","Type":"ContainerStarted","Data":"604d6079529a1de20dba10b583a6ed808dc19d56b7855f6065d6f42ff4daa1f6"}
Mar 21 09:20:32 crc kubenswrapper[4932]: I0321 09:20:32.851309 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 09:20:32 crc kubenswrapper[4932]: I0321 09:20:32.852227 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerStarted","Data":"f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c"}
Mar 21 09:20:33 crc kubenswrapper[4932]: I0321 09:20:33.235496 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Mar 21 09:20:33 crc kubenswrapper[4932]: I0321 09:20:33.867768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0a90781-196e-452d-9175-b390d33a495c","Type":"ContainerStarted","Data":"ce55b44d7e88040da15f813605439f82d71d9439208198b1e0b94e080454978f"}
Mar 21 09:20:34 crc kubenswrapper[4932]: I0321 09:20:34.084792 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Mar 21 09:20:34 crc kubenswrapper[4932]: I0321 09:20:34.895998 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerStarted","Data":"e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749"}
Mar 21 09:20:34 crc kubenswrapper[4932]: I0321 09:20:34.896831 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 21 09:20:34 crc kubenswrapper[4932]: I0321 09:20:34.942143 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.942115741 podStartE2EDuration="5.942115741s" podCreationTimestamp="2026-03-21 09:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:33.922488468 +0000 UTC m=+1337.517686747" watchObservedRunningTime="2026-03-21 09:20:34.942115741 +0000 UTC m=+1338.537314010"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.242013 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.267168 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.291405 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.292481 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.297062 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.728721252 podStartE2EDuration="9.297031681s" podCreationTimestamp="2026-03-21 09:20:28 +0000 UTC" firstStartedPulling="2026-03-21 09:20:30.355079378 +0000 UTC m=+1333.950277647" lastFinishedPulling="2026-03-21 09:20:33.923389807 +0000 UTC m=+1337.518588076" observedRunningTime="2026-03-21 09:20:34.934366907 +0000 UTC m=+1338.529565176" watchObservedRunningTime="2026-03-21 09:20:37.297031681 +0000 UTC m=+1340.892229950"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.332769 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.349323 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.932734 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.932777 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:37 crc kubenswrapper[4932]: I0321 09:20:37.940932 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Mar 21 09:20:39 crc kubenswrapper[4932]: I0321 09:20:39.084379 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Mar 21 09:20:39 crc kubenswrapper[4932]: I0321 09:20:39.120230 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.010815 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.154805 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.154926 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.286264 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.321781 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.321832 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.375147 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.387340 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.702257 4932 scope.go:117] "RemoveContainer" containerID="0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034"
Mar 21 09:20:40 crc kubenswrapper[4932]: E0321 09:20:40.702492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.986413 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:40 crc kubenswrapper[4932]: I0321 09:20:40.986463 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:42 crc kubenswrapper[4932]: I0321 09:20:42.705857 4932 scope.go:117] "RemoveContainer" containerID="5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081"
Mar 21 09:20:43 crc kubenswrapper[4932]: I0321 09:20:43.047038 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47"}
Mar 21 09:20:43 crc kubenswrapper[4932]: I0321 09:20:43.049623 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" event={"ID":"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4","Type":"ContainerStarted","Data":"b5c105d3d2060015e4efc7f98e2970afdf3451748f675a044452fb9c7c0b0a87"}
Mar 21 09:20:43 crc kubenswrapper[4932]: I0321 09:20:43.097286 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" podStartSLOduration=2.616848504 podStartE2EDuration="14.097264676s" podCreationTimestamp="2026-03-21 09:20:29 +0000 UTC" firstStartedPulling="2026-03-21 09:20:31.131549616 +0000 UTC m=+1334.726747885" lastFinishedPulling="2026-03-21 09:20:42.611965788 +0000 UTC m=+1346.207164057" observedRunningTime="2026-03-21 09:20:43.08790402 +0000 UTC m=+1346.683102289" watchObservedRunningTime="2026-03-21 09:20:43.097264676 +0000 UTC m=+1346.692462945"
Mar 21 09:20:43 crc kubenswrapper[4932]: I0321 09:20:43.125445 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:43 crc kubenswrapper[4932]: I0321 09:20:43.125549 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 09:20:43 crc kubenswrapper[4932]: I0321 09:20:43.131494 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 21 09:20:45 crc kubenswrapper[4932]: I0321 09:20:45.506731 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:45 crc kubenswrapper[4932]: I0321 09:20:45.507358 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-central-agent" containerID="cri-o://339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8" gracePeriod=30
Mar 21 09:20:45 crc kubenswrapper[4932]: I0321 09:20:45.507447 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="sg-core" containerID="cri-o://f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c" gracePeriod=30
Mar 21 09:20:45 crc kubenswrapper[4932]: I0321 09:20:45.507487 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="proxy-httpd" containerID="cri-o://e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749" gracePeriod=30
Mar 21 09:20:45 crc kubenswrapper[4932]: I0321 09:20:45.507448 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-notification-agent" containerID="cri-o://e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba" gracePeriod=30
Mar 21 09:20:45 crc kubenswrapper[4932]: I0321 09:20:45.523905 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 21 09:20:46 crc kubenswrapper[4932]: I0321 09:20:46.079995 4932 generic.go:334] "Generic (PLEG): container finished" podID="276ad914-019c-4ff0-8936-5ec79d750364" containerID="e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749" exitCode=0
Mar 21 09:20:46 crc kubenswrapper[4932]: I0321 09:20:46.080340 4932 generic.go:334] "Generic (PLEG): container finished" podID="276ad914-019c-4ff0-8936-5ec79d750364" containerID="f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c" exitCode=2
Mar 21 09:20:46 crc kubenswrapper[4932]: I0321 09:20:46.080448 4932 generic.go:334] "Generic (PLEG): container finished" podID="276ad914-019c-4ff0-8936-5ec79d750364" containerID="339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8" exitCode=0
Mar 21 09:20:46 crc kubenswrapper[4932]: I0321 09:20:46.080068 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerDied","Data":"e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749"}
Mar 21 09:20:46 crc kubenswrapper[4932]: I0321 09:20:46.080616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerDied","Data":"f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c"}
Mar 21 09:20:46 crc kubenswrapper[4932]: I0321 09:20:46.080714 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerDied","Data":"339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8"}
Mar 21 09:20:47 crc kubenswrapper[4932]: I0321 09:20:47.741387 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:20:47 crc kubenswrapper[4932]: I0321 09:20:47.741725 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.052213 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.120761 4932 generic.go:334] "Generic (PLEG): container finished" podID="276ad914-019c-4ff0-8936-5ec79d750364" containerID="e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba" exitCode=0
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.120819 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerDied","Data":"e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba"}
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.120857 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"276ad914-019c-4ff0-8936-5ec79d750364","Type":"ContainerDied","Data":"551433dea649237b7b5c036ac46299f3e5efcb805c7bac2c3eeb3540d860887a"}
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.120868 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.120880 4932 scope.go:117] "RemoveContainer" containerID="e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.136914 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8667h\" (UniqueName: \"kubernetes.io/projected/276ad914-019c-4ff0-8936-5ec79d750364-kube-api-access-8667h\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.136985 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-config-data\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.138139 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-scripts\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.138258 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-sg-core-conf-yaml\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.138391 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-run-httpd\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.138467 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-combined-ca-bundle\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.138503 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-log-httpd\") pod \"276ad914-019c-4ff0-8936-5ec79d750364\" (UID: \"276ad914-019c-4ff0-8936-5ec79d750364\") "
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.140016 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.140177 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.145575 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276ad914-019c-4ff0-8936-5ec79d750364-kube-api-access-8667h" (OuterVolumeSpecName: "kube-api-access-8667h") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "kube-api-access-8667h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.155953 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-scripts" (OuterVolumeSpecName: "scripts") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.182402 4932 scope.go:117] "RemoveContainer" containerID="f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.208137 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.241643 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.241927 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.241942 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/276ad914-019c-4ff0-8936-5ec79d750364-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.241955 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8667h\" (UniqueName: \"kubernetes.io/projected/276ad914-019c-4ff0-8936-5ec79d750364-kube-api-access-8667h\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.241966 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.271066 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.284760 4932 scope.go:117] "RemoveContainer" containerID="e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.290041 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-config-data" (OuterVolumeSpecName: "config-data") pod "276ad914-019c-4ff0-8936-5ec79d750364" (UID: "276ad914-019c-4ff0-8936-5ec79d750364"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.309414 4932 scope.go:117] "RemoveContainer" containerID="339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.336157 4932 scope.go:117] "RemoveContainer" containerID="e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.336944 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749\": container with ID starting with e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749 not found: ID does not exist" containerID="e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.337051 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749"} err="failed to get container status \"e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749\": rpc error: code = NotFound desc = could not find container \"e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749\": container with ID starting with e23d32bc690a2202df1e8783c4a6641eba14ad3e05fb96e7a33038d1dfc22749 not found: ID does not exist"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.337133 4932 scope.go:117] "RemoveContainer" containerID="f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.340296 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c\": container with ID starting with f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c not found: ID does not exist" containerID="f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.340829 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c"} err="failed to get container status \"f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c\": rpc error: code = NotFound desc = could not find container \"f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c\": container with ID starting with f17c8c81a052e3b08534b09eeb2de3077f577d8a9f391b3896be6aa8552fff0c not found: ID does not exist"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.341308 4932 scope.go:117] "RemoveContainer" containerID="e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.341912 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba\": container with ID starting with e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba not found: ID does not exist" containerID="e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.342063 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba"} err="failed to get container status \"e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba\": rpc error: code = NotFound desc = could not find container \"e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba\": container with ID starting with e56070a65c9c45898cfbd7827f17a9b6162a4a1aaf0d3be939c69ea7100da9ba not found: ID does not exist"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.342249 4932 scope.go:117] "RemoveContainer" containerID="339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.342847 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8\": container with ID starting with 339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8 not found: ID does not exist" containerID="339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.342896 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8"} err="failed to get container status \"339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8\": rpc error: code = NotFound desc = could not find container \"339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8\": container with ID starting with 339253e7cff13054d487b6fd16a3f139cf1f30f06d2a0380f2160d5672e173c8 not found: ID does not exist"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.343981 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.344002 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/276ad914-019c-4ff0-8936-5ec79d750364-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.463097 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.475493 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.487664 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.492527 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-central-agent"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.492573 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-central-agent"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.492631 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-notification-agent"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.492640 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-notification-agent"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.492672 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="proxy-httpd"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.492681 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="proxy-httpd"
Mar 21 09:20:49 crc kubenswrapper[4932]: E0321 09:20:49.492699 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="sg-core"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.492707 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="sg-core"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.493108 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="sg-core"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.493142 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-notification-agent"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.493158 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="ceilometer-central-agent"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.493170 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="276ad914-019c-4ff0-8936-5ec79d750364" containerName="proxy-httpd"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.495366 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.498068 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.498247 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.505131 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548444 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-run-httpd\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548516 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548589 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-config-data\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548613 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnjl\" (UniqueName: \"kubernetes.io/projected/f15063b8-98f3-4a4b-94b3-4db808f7ad73-kube-api-access-7vnjl\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548635 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-log-httpd\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.548744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-scripts\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.653753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-run-httpd\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.654049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.654211 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-config-data\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.654291 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vnjl\" (UniqueName: \"kubernetes.io/projected/f15063b8-98f3-4a4b-94b3-4db808f7ad73-kube-api-access-7vnjl\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.654380 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.654554 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-log-httpd\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.654657 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-scripts\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.663199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-run-httpd\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.663932 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-log-httpd\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.667638 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-scripts\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.668561 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-config-data\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.675227 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.676243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.688242 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vnjl\" (UniqueName: \"kubernetes.io/projected/f15063b8-98f3-4a4b-94b3-4db808f7ad73-kube-api-access-7vnjl\") pod \"ceilometer-0\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " pod="openstack/ceilometer-0"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.734019 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276ad914-019c-4ff0-8936-5ec79d750364" path="/var/lib/kubelet/pods/276ad914-019c-4ff0-8936-5ec79d750364/volumes"
Mar 21 09:20:49 crc kubenswrapper[4932]: I0321 09:20:49.815099 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:20:50 crc kubenswrapper[4932]: I0321 09:20:50.324266 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:20:50 crc kubenswrapper[4932]: W0321 09:20:50.328169 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf15063b8_98f3_4a4b_94b3_4db808f7ad73.slice/crio-561faa8fbbcc5dc34e88defd87906126e891902bafc8ad285364375053dd3a64 WatchSource:0}: Error finding container 561faa8fbbcc5dc34e88defd87906126e891902bafc8ad285364375053dd3a64: Status 404 returned error can't find the container with id 561faa8fbbcc5dc34e88defd87906126e891902bafc8ad285364375053dd3a64
Mar 21 09:20:50 crc kubenswrapper[4932]: E0321 09:20:50.861306 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Mar 21 09:20:50 crc kubenswrapper[4932]: E0321 09:20:50.863077 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1"
containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 21 09:20:50 crc kubenswrapper[4932]: E0321 09:20:50.864399 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 21 09:20:50 crc kubenswrapper[4932]: E0321 09:20:50.864442 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:51 crc kubenswrapper[4932]: I0321 09:20:51.153223 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerStarted","Data":"561faa8fbbcc5dc34e88defd87906126e891902bafc8ad285364375053dd3a64"} Mar 21 09:20:52 crc kubenswrapper[4932]: I0321 09:20:52.177224 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerStarted","Data":"24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729"} Mar 21 09:20:52 crc kubenswrapper[4932]: I0321 09:20:52.177725 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerStarted","Data":"9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc"} Mar 21 09:20:52 crc kubenswrapper[4932]: I0321 09:20:52.702607 4932 scope.go:117] "RemoveContainer" 
containerID="0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034" Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.189551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerStarted","Data":"ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453"} Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.192912 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07"} Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.195536 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47" exitCode=1 Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.195581 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47"} Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.195638 4932 scope.go:117] "RemoveContainer" containerID="5d94ad5b1a7c95d5cf3307c44274ad099eb4f2d593f6cb5b846c4cb6707fc081" Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.196220 4932 scope.go:117] "RemoveContainer" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47" Mar 21 09:20:53 crc kubenswrapper[4932]: E0321 09:20:53.196507 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:20:53 crc kubenswrapper[4932]: I0321 09:20:53.968301 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.066816 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.153881 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-config-data\") pod \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.154040 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-custom-prometheus-ca\") pod \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.154122 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-logs\") pod \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.154189 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-277bh\" (UniqueName: \"kubernetes.io/projected/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-kube-api-access-277bh\") pod \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.154334 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-combined-ca-bundle\") pod \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\" (UID: \"7d4c5dc9-4c73-486d-9427-2a5f07da9e89\") " Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.154823 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-logs" (OuterVolumeSpecName: "logs") pod "7d4c5dc9-4c73-486d-9427-2a5f07da9e89" (UID: "7d4c5dc9-4c73-486d-9427-2a5f07da9e89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.155039 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.158601 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-kube-api-access-277bh" (OuterVolumeSpecName: "kube-api-access-277bh") pod "7d4c5dc9-4c73-486d-9427-2a5f07da9e89" (UID: "7d4c5dc9-4c73-486d-9427-2a5f07da9e89"). InnerVolumeSpecName "kube-api-access-277bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.187921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7d4c5dc9-4c73-486d-9427-2a5f07da9e89" (UID: "7d4c5dc9-4c73-486d-9427-2a5f07da9e89"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.198599 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d4c5dc9-4c73-486d-9427-2a5f07da9e89" (UID: "7d4c5dc9-4c73-486d-9427-2a5f07da9e89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.220613 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerStarted","Data":"00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1"} Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.221286 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-central-agent" containerID="cri-o://9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc" gracePeriod=30 Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.221463 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="sg-core" containerID="cri-o://ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453" gracePeriod=30 Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.221525 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-notification-agent" containerID="cri-o://24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729" gracePeriod=30 Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.221608 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="proxy-httpd" containerID="cri-o://00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1" gracePeriod=30 Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.221303 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.243805 4932 generic.go:334] "Generic (PLEG): container finished" podID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" exitCode=137 Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.243915 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerDied","Data":"f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f"} Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.243948 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7d4c5dc9-4c73-486d-9427-2a5f07da9e89","Type":"ContainerDied","Data":"50cbe2908b481732241e69d36c2de581f4209b27430c962ad06a77e862d6199f"} Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.243964 4932 scope.go:117] "RemoveContainer" containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.243997 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.244522 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-config-data" (OuterVolumeSpecName: "config-data") pod "7d4c5dc9-4c73-486d-9427-2a5f07da9e89" (UID: "7d4c5dc9-4c73-486d-9427-2a5f07da9e89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.254879 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.625885765 podStartE2EDuration="5.254842036s" podCreationTimestamp="2026-03-21 09:20:49 +0000 UTC" firstStartedPulling="2026-03-21 09:20:50.331001798 +0000 UTC m=+1353.926200067" lastFinishedPulling="2026-03-21 09:20:53.959958069 +0000 UTC m=+1357.555156338" observedRunningTime="2026-03-21 09:20:54.244281423 +0000 UTC m=+1357.839479692" watchObservedRunningTime="2026-03-21 09:20:54.254842036 +0000 UTC m=+1357.850040305" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.256717 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.257133 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.257142 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.257151 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-277bh\" (UniqueName: \"kubernetes.io/projected/7d4c5dc9-4c73-486d-9427-2a5f07da9e89-kube-api-access-277bh\") on node \"crc\" DevicePath \"\"" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.330857 4932 scope.go:117] "RemoveContainer" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.429027 
4932 scope.go:117] "RemoveContainer" containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" Mar 21 09:20:54 crc kubenswrapper[4932]: E0321 09:20:54.429633 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f\": container with ID starting with f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f not found: ID does not exist" containerID="f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.429684 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f"} err="failed to get container status \"f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f\": rpc error: code = NotFound desc = could not find container \"f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f\": container with ID starting with f38acd2a7b13f0e5324486668816fa295296e7cfdca172bcd3ed3ddf4e5e9a3f not found: ID does not exist" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.429712 4932 scope.go:117] "RemoveContainer" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" Mar 21 09:20:54 crc kubenswrapper[4932]: E0321 09:20:54.431298 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b\": container with ID starting with 5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b not found: ID does not exist" containerID="5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.431332 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b"} err="failed to get container status \"5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b\": rpc error: code = NotFound desc = could not find container \"5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b\": container with ID starting with 5ffc14da4e2558ba89482895060a1565ec1137b6c78c1db35df9eb65e4aa1c2b not found: ID does not exist" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.582099 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.595554 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.613746 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:20:54 crc kubenswrapper[4932]: E0321 09:20:54.614232 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.614254 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc kubenswrapper[4932]: E0321 09:20:54.614279 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.614286 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.614506 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc 
kubenswrapper[4932]: I0321 09:20:54.614531 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.614543 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.615303 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.622525 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.634693 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.673237 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.673294 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce099b91-a4a0-4e8b-887d-6680f656ad71-logs\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.673319 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52r6\" (UniqueName: 
\"kubernetes.io/projected/ce099b91-a4a0-4e8b-887d-6680f656ad71-kube-api-access-g52r6\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.673355 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.673414 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.775203 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.775280 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce099b91-a4a0-4e8b-887d-6680f656ad71-logs\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.775299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52r6\" (UniqueName: 
\"kubernetes.io/projected/ce099b91-a4a0-4e8b-887d-6680f656ad71-kube-api-access-g52r6\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.775318 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.775401 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.777603 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce099b91-a4a0-4e8b-887d-6680f656ad71-logs\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.780825 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.781073 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-custom-prometheus-ca\") pod 
\"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.783041 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce099b91-a4a0-4e8b-887d-6680f656ad71-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.817410 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52r6\" (UniqueName: \"kubernetes.io/projected/ce099b91-a4a0-4e8b-887d-6680f656ad71-kube-api-access-g52r6\") pod \"watcher-decision-engine-0\" (UID: \"ce099b91-a4a0-4e8b-887d-6680f656ad71\") " pod="openstack/watcher-decision-engine-0" Mar 21 09:20:54 crc kubenswrapper[4932]: I0321 09:20:54.948408 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 21 09:20:55 crc kubenswrapper[4932]: I0321 09:20:55.275724 4932 generic.go:334] "Generic (PLEG): container finished" podID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerID="ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453" exitCode=2 Mar 21 09:20:55 crc kubenswrapper[4932]: I0321 09:20:55.276058 4932 generic.go:334] "Generic (PLEG): container finished" podID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerID="24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729" exitCode=0 Mar 21 09:20:55 crc kubenswrapper[4932]: I0321 09:20:55.276118 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerDied","Data":"ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453"} Mar 21 09:20:55 crc kubenswrapper[4932]: I0321 09:20:55.276154 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerDied","Data":"24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729"}
Mar 21 09:20:55 crc kubenswrapper[4932]: I0321 09:20:55.406455 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Mar 21 09:20:55 crc kubenswrapper[4932]: W0321 09:20:55.411927 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce099b91_a4a0_4e8b_887d_6680f656ad71.slice/crio-f2beecdb2a3b43f67efb18353a3a61be9d2ae2b8eb5bf500820c94a6b8f3ab0d WatchSource:0}: Error finding container f2beecdb2a3b43f67efb18353a3a61be9d2ae2b8eb5bf500820c94a6b8f3ab0d: Status 404 returned error can't find the container with id f2beecdb2a3b43f67efb18353a3a61be9d2ae2b8eb5bf500820c94a6b8f3ab0d
Mar 21 09:20:55 crc kubenswrapper[4932]: I0321 09:20:55.712581 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" path="/var/lib/kubelet/pods/7d4c5dc9-4c73-486d-9427-2a5f07da9e89/volumes"
Mar 21 09:20:56 crc kubenswrapper[4932]: I0321 09:20:56.289545 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ce099b91-a4a0-4e8b-887d-6680f656ad71","Type":"ContainerStarted","Data":"44f3d96a2dac055451ac28534d51412b9807fd68bf64111d43df5f17863ebcaf"}
Mar 21 09:20:56 crc kubenswrapper[4932]: I0321 09:20:56.289808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ce099b91-a4a0-4e8b-887d-6680f656ad71","Type":"ContainerStarted","Data":"f2beecdb2a3b43f67efb18353a3a61be9d2ae2b8eb5bf500820c94a6b8f3ab0d"}
Mar 21 09:20:56 crc kubenswrapper[4932]: I0321 09:20:56.314880 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.314847517 podStartE2EDuration="2.314847517s" podCreationTimestamp="2026-03-21 09:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:20:56.306274638 +0000 UTC m=+1359.901472907" watchObservedRunningTime="2026-03-21 09:20:56.314847517 +0000 UTC m=+1359.910045786"
Mar 21 09:20:57 crc kubenswrapper[4932]: I0321 09:20:57.740770 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:20:57 crc kubenswrapper[4932]: I0321 09:20:57.741533 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:20:57 crc kubenswrapper[4932]: I0321 09:20:57.742186 4932 scope.go:117] "RemoveContainer" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47"
Mar 21 09:20:57 crc kubenswrapper[4932]: E0321 09:20:57.742407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:20:57 crc kubenswrapper[4932]: I0321 09:20:57.947600 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:20:57 crc kubenswrapper[4932]: I0321 09:20:57.947870 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:20:58 crc kubenswrapper[4932]: I0321 09:20:58.311180 4932 generic.go:334] "Generic (PLEG): container finished" podID="3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" containerID="b5c105d3d2060015e4efc7f98e2970afdf3451748f675a044452fb9c7c0b0a87" exitCode=0
Mar 21 09:20:58 crc kubenswrapper[4932]: I0321 09:20:58.311224 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" event={"ID":"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4","Type":"ContainerDied","Data":"b5c105d3d2060015e4efc7f98e2970afdf3451748f675a044452fb9c7c0b0a87"}
Mar 21 09:20:58 crc kubenswrapper[4932]: I0321 09:20:58.899791 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 21 09:20:58 crc kubenswrapper[4932]: I0321 09:20:58.900285 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="bb2449ef-1380-4083-87cd-242b41f821ac" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": dial tcp 10.217.0.170:9292: i/o timeout"
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.685737 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jtrd4"
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.779316 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-combined-ca-bundle\") pod \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") "
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.779442 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data\") pod \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") "
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.779484 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-scripts\") pod \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") "
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.779624 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg87x\" (UniqueName: \"kubernetes.io/projected/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-kube-api-access-zg87x\") pod \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") "
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.786052 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-scripts" (OuterVolumeSpecName: "scripts") pod "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" (UID: "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.794963 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-kube-api-access-zg87x" (OuterVolumeSpecName: "kube-api-access-zg87x") pod "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" (UID: "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4"). InnerVolumeSpecName "kube-api-access-zg87x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:20:59 crc kubenswrapper[4932]: E0321 09:20:59.810277 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data podName:3c282ef1-ca4b-4eb8-9acb-ed95a92625a4 nodeName:}" failed. No retries permitted until 2026-03-21 09:21:00.310230154 +0000 UTC m=+1363.905428423 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data") pod "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" (UID: "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4") : error deleting /var/lib/kubelet/pods/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4/volume-subpaths: remove /var/lib/kubelet/pods/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4/volume-subpaths: no such file or directory
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.813259 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" (UID: "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.881745 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.881781 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg87x\" (UniqueName: \"kubernetes.io/projected/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-kube-api-access-zg87x\") on node \"crc\" DevicePath \"\""
Mar 21 09:20:59 crc kubenswrapper[4932]: I0321 09:20:59.881791 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.330370 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jtrd4" event={"ID":"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4","Type":"ContainerDied","Data":"cb5f9395e676e2c0bdb4ccac375ee6eb363196a496eee219d2ab56d859ba1187"}
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.330748 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb5f9395e676e2c0bdb4ccac375ee6eb363196a496eee219d2ab56d859ba1187"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.330445 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jtrd4"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.391106 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data\") pod \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\" (UID: \"3c282ef1-ca4b-4eb8-9acb-ed95a92625a4\") "
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.396950 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data" (OuterVolumeSpecName: "config-data") pod "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" (UID: "3c282ef1-ca4b-4eb8-9acb-ed95a92625a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.435073 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 21 09:21:00 crc kubenswrapper[4932]: E0321 09:21:00.435649 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.435674 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine"
Mar 21 09:21:00 crc kubenswrapper[4932]: E0321 09:21:00.435695 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" containerName="nova-cell0-conductor-db-sync"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.435704 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" containerName="nova-cell0-conductor-db-sync"
Mar 21 09:21:00 crc kubenswrapper[4932]: E0321 09:21:00.435721 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.435729 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.435984 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" containerName="nova-cell0-conductor-db-sync"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.436024 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4c5dc9-4c73-486d-9427-2a5f07da9e89" containerName="watcher-decision-engine"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.436891 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.470768 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.508071 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.508163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf9ns\" (UniqueName: \"kubernetes.io/projected/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-kube-api-access-pf9ns\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.508232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.508518 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.610175 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf9ns\" (UniqueName: \"kubernetes.io/projected/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-kube-api-access-pf9ns\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.610273 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.610469 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.619370 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.630160 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.630950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf9ns\" (UniqueName: \"kubernetes.io/projected/ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad-kube-api-access-pf9ns\") pod \"nova-cell0-conductor-0\" (UID: \"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad\") " pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:00 crc kubenswrapper[4932]: I0321 09:21:00.814231 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:01 crc kubenswrapper[4932]: I0321 09:21:01.264590 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 21 09:21:01 crc kubenswrapper[4932]: I0321 09:21:01.340284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad","Type":"ContainerStarted","Data":"e9d69cb8a5cbd7ce788415b33bdab8b406a5e6c45a083c61c77b2daf976cd1d0"}
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.359585 4932 generic.go:334] "Generic (PLEG): container finished" podID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerID="9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc" exitCode=0
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.359660 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerDied","Data":"9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc"}
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.364420 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad","Type":"ContainerStarted","Data":"d3d5b9a94e006b2f1ed46d7a270ca4191843e41487ddf1f5060012d527a89ad3"}
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.365566 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.370765 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07" exitCode=1
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.370815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07"}
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.370855 4932 scope.go:117] "RemoveContainer" containerID="0c5a762b3cf40820ecdcf5d64c48348d7f99b83092f0dbb95a8bbf1e5761e034"
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.371876 4932 scope.go:117] "RemoveContainer" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07"
Mar 21 09:21:02 crc kubenswrapper[4932]: E0321 09:21:02.372109 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:21:02 crc kubenswrapper[4932]: I0321 09:21:02.429313 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.429284904 podStartE2EDuration="2.429284904s" podCreationTimestamp="2026-03-21 09:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:02.397569235 +0000 UTC m=+1365.992767514" watchObservedRunningTime="2026-03-21 09:21:02.429284904 +0000 UTC m=+1366.024483183"
Mar 21 09:21:04 crc kubenswrapper[4932]: I0321 09:21:04.949542 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Mar 21 09:21:04 crc kubenswrapper[4932]: I0321 09:21:04.979084 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Mar 21 09:21:05 crc kubenswrapper[4932]: I0321 09:21:05.402433 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Mar 21 09:21:05 crc kubenswrapper[4932]: I0321 09:21:05.429946 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Mar 21 09:21:07 crc kubenswrapper[4932]: I0321 09:21:07.948154 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:21:07 crc kubenswrapper[4932]: I0321 09:21:07.948526 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:21:07 crc kubenswrapper[4932]: I0321 09:21:07.949572 4932 scope.go:117] "RemoveContainer" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07"
Mar 21 09:21:07 crc kubenswrapper[4932]: E0321 09:21:07.949901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:21:10 crc kubenswrapper[4932]: I0321 09:21:10.702240 4932 scope.go:117] "RemoveContainer" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47"
Mar 21 09:21:10 crc kubenswrapper[4932]: E0321 09:21:10.702498 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:21:10 crc kubenswrapper[4932]: I0321 09:21:10.848121 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.422599 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pwhrl"]
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.424185 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.425968 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.426861 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.442245 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.442388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-config-data\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.442474 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lnw4\" (UniqueName: \"kubernetes.io/projected/7c9966f4-a444-47a1-9394-7d0483967734-kube-api-access-5lnw4\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.442548 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-scripts\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.447709 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwhrl"]
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.548570 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.557577 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-config-data\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.557699 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lnw4\" (UniqueName: \"kubernetes.io/projected/7c9966f4-a444-47a1-9394-7d0483967734-kube-api-access-5lnw4\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.557854 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-scripts\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.568295 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-config-data\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.569314 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-scripts\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.574129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.590827 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lnw4\" (UniqueName: \"kubernetes.io/projected/7c9966f4-a444-47a1-9394-7d0483967734-kube-api-access-5lnw4\") pod \"nova-cell0-cell-mapping-pwhrl\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") " pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.671473 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.673869 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.685937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.693510 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 21 09:21:11 crc kubenswrapper[4932]: I0321 09:21:11.765357 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.765676 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6fsz\" (UniqueName: \"kubernetes.io/projected/e1a88675-1241-40f6-9da5-e4f91db28452-kube-api-access-t6fsz\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.765888 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a88675-1241-40f6-9da5-e4f91db28452-logs\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.765970 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.766044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-config-data\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.872947 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-config-data\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.873047 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6fsz\" (UniqueName: \"kubernetes.io/projected/e1a88675-1241-40f6-9da5-e4f91db28452-kube-api-access-t6fsz\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.873174 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a88675-1241-40f6-9da5-e4f91db28452-logs\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.873238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.875368 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a88675-1241-40f6-9da5-e4f91db28452-logs\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.891947 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.900980 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-config-data\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:11.914338 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6fsz\" (UniqueName: \"kubernetes.io/projected/e1a88675-1241-40f6-9da5-e4f91db28452-kube-api-access-t6fsz\") pod \"nova-api-0\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.085911 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.105868 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.105914 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.106082 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.107263 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.107288 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.108705 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.109841 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.109869 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56b6d686c9-jxr9g"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.110748 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.111046 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.111336 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56b6d686c9-jxr9g"]
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.111451 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.111647 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.116158 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.135305 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.184682 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-config\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185081 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee8412-51a7-4098-b576-1bd95bc7def8-logs\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-svc\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185169 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-config-data\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185184 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185202 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-sb\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185223 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-config-data\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185251 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-swift-storage-0\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g"
Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185267 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq4q4\" (UniqueName: \"kubernetes.io/projected/16a5f9de-7325-4400-a5c6-17b478be66a1-kube-api-access-bq4q4\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g"
Mar 21
09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185293 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-nb\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185434 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185469 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfsjq\" (UniqueName: \"kubernetes.io/projected/f9e577fb-4510-4fd9-8560-1fa627d3f94c-kube-api-access-zfsjq\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fkz6\" (UniqueName: \"kubernetes.io/projected/a4d25fc6-974f-4695-8d73-2783af6957f5-kube-api-access-9fkz6\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.185548 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpd6w\" (UniqueName: \"kubernetes.io/projected/7cee8412-51a7-4098-b576-1bd95bc7def8-kube-api-access-gpd6w\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287493 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfsjq\" (UniqueName: \"kubernetes.io/projected/f9e577fb-4510-4fd9-8560-1fa627d3f94c-kube-api-access-zfsjq\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287538 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fkz6\" (UniqueName: \"kubernetes.io/projected/a4d25fc6-974f-4695-8d73-2783af6957f5-kube-api-access-9fkz6\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287560 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpd6w\" (UniqueName: \"kubernetes.io/projected/7cee8412-51a7-4098-b576-1bd95bc7def8-kube-api-access-gpd6w\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: 
I0321 09:21:12.287576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-config\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287599 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee8412-51a7-4098-b576-1bd95bc7def8-logs\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-svc\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-config-data\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-sb\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-config-data\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287781 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-swift-storage-0\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287801 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq4q4\" (UniqueName: \"kubernetes.io/projected/16a5f9de-7325-4400-a5c6-17b478be66a1-kube-api-access-bq4q4\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-combined-ca-bundle\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.287865 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-nb\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.289121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-config\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.290181 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-swift-storage-0\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.290831 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-sb\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.292027 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-svc\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " 
pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.294078 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee8412-51a7-4098-b576-1bd95bc7def8-logs\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.305151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.306006 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-nb\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.308393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.312684 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-config-data\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.317124 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.317840 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpd6w\" (UniqueName: \"kubernetes.io/projected/7cee8412-51a7-4098-b576-1bd95bc7def8-kube-api-access-gpd6w\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.321614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq4q4\" (UniqueName: \"kubernetes.io/projected/16a5f9de-7325-4400-a5c6-17b478be66a1-kube-api-access-bq4q4\") pod \"dnsmasq-dns-56b6d686c9-jxr9g\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.322575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.322958 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-config-data\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.327983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fkz6\" (UniqueName: \"kubernetes.io/projected/a4d25fc6-974f-4695-8d73-2783af6957f5-kube-api-access-9fkz6\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"a4d25fc6-974f-4695-8d73-2783af6957f5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.328286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfsjq\" (UniqueName: \"kubernetes.io/projected/f9e577fb-4510-4fd9-8560-1fa627d3f94c-kube-api-access-zfsjq\") pod \"nova-scheduler-0\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.461694 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.505820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwhrl"] Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.583317 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.601769 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.620133 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.698421 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.764174 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-92xlk"] Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.765941 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.771673 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.771830 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.783050 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-92xlk"] Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.800258 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.800937 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-scripts\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.801022 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5266\" (UniqueName: \"kubernetes.io/projected/98d55e78-bebc-4aa9-9043-a107dce766ab-kube-api-access-r5266\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.801046 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-config-data\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.903739 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.903849 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-scripts\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.903929 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5266\" (UniqueName: \"kubernetes.io/projected/98d55e78-bebc-4aa9-9043-a107dce766ab-kube-api-access-r5266\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.904552 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-config-data\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.911284 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-scripts\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.915483 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.926316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-config-data\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:12 crc kubenswrapper[4932]: I0321 09:21:12.939444 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5266\" (UniqueName: \"kubernetes.io/projected/98d55e78-bebc-4aa9-9043-a107dce766ab-kube-api-access-r5266\") pod \"nova-cell1-conductor-db-sync-92xlk\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.021979 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.217593 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.377424 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56b6d686c9-jxr9g"] Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.412431 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.550498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cee8412-51a7-4098-b576-1bd95bc7def8","Type":"ContainerStarted","Data":"7d81a2b240145f33b40dfd07630ef7b6f10b457acb40a132711ee93c1948b03e"} Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.555557 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1a88675-1241-40f6-9da5-e4f91db28452","Type":"ContainerStarted","Data":"959cde1fa192a0d75de555a656f18657efc10ce8de226bb1cc0fc2f7bc8ce87a"} Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.566013 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" event={"ID":"16a5f9de-7325-4400-a5c6-17b478be66a1","Type":"ContainerStarted","Data":"6d35a3fd5d24b95b8e1c0918a7cfd8301929dd6e45eea3b6ea8422ef1c8afc4a"} Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.570791 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwhrl" event={"ID":"7c9966f4-a444-47a1-9394-7d0483967734","Type":"ContainerStarted","Data":"36ef173a5f14536087ad02cd2c4e7c5c85f38eaa9c9c63e3aa0461cf3e6eae93"} Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.570842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwhrl" event={"ID":"7c9966f4-a444-47a1-9394-7d0483967734","Type":"ContainerStarted","Data":"be14166f26afefae39df344ed0b912b72268310b1e1c52ad55c5e7ce7dff2431"} Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 
09:21:13.580654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f9e577fb-4510-4fd9-8560-1fa627d3f94c","Type":"ContainerStarted","Data":"cccca58e1b671a67a1770a461dea6757edd68e1d56f3d58f2f514ad5c21af398"} Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.614069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pwhrl" podStartSLOduration=2.614035477 podStartE2EDuration="2.614035477s" podCreationTimestamp="2026-03-21 09:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:13.602260816 +0000 UTC m=+1377.197459085" watchObservedRunningTime="2026-03-21 09:21:13.614035477 +0000 UTC m=+1377.209233746" Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.765033 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 21 09:21:13 crc kubenswrapper[4932]: I0321 09:21:13.977397 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-92xlk"] Mar 21 09:21:14 crc kubenswrapper[4932]: I0321 09:21:14.604821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4d25fc6-974f-4695-8d73-2783af6957f5","Type":"ContainerStarted","Data":"ab8113c13ae044f001d2099734ece2322b791d4594d4cc52a33138c0ee34da06"} Mar 21 09:21:14 crc kubenswrapper[4932]: I0321 09:21:14.607486 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-92xlk" event={"ID":"98d55e78-bebc-4aa9-9043-a107dce766ab","Type":"ContainerStarted","Data":"7f9e8bdaca3cd0e46e062a109bef725daec77c2c828df2ff096430f39d58156c"} Mar 21 09:21:14 crc kubenswrapper[4932]: I0321 09:21:14.607512 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-92xlk" 
event={"ID":"98d55e78-bebc-4aa9-9043-a107dce766ab","Type":"ContainerStarted","Data":"e99d853dcb6ecb943df7dc329cff9e8e7f4ff728a49a1158474db10f63e12b3e"} Mar 21 09:21:14 crc kubenswrapper[4932]: I0321 09:21:14.617447 4932 generic.go:334] "Generic (PLEG): container finished" podID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerID="3b538998e426263559dac55cbb0e4cb760981ffb2b7b2c36ebc37d138b6f8535" exitCode=0 Mar 21 09:21:14 crc kubenswrapper[4932]: I0321 09:21:14.619023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" event={"ID":"16a5f9de-7325-4400-a5c6-17b478be66a1","Type":"ContainerDied","Data":"3b538998e426263559dac55cbb0e4cb760981ffb2b7b2c36ebc37d138b6f8535"} Mar 21 09:21:14 crc kubenswrapper[4932]: I0321 09:21:14.631755 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-92xlk" podStartSLOduration=2.631728941 podStartE2EDuration="2.631728941s" podCreationTimestamp="2026-03-21 09:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:14.628933553 +0000 UTC m=+1378.224131832" watchObservedRunningTime="2026-03-21 09:21:14.631728941 +0000 UTC m=+1378.226927210" Mar 21 09:21:15 crc kubenswrapper[4932]: I0321 09:21:15.672434 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" event={"ID":"16a5f9de-7325-4400-a5c6-17b478be66a1","Type":"ContainerStarted","Data":"ef670a2965d6f38e980f4164920fd54dcf8c01355790c623143d781d5154f29c"} Mar 21 09:21:15 crc kubenswrapper[4932]: I0321 09:21:15.672957 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:15 crc kubenswrapper[4932]: I0321 09:21:15.734269 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" 
podStartSLOduration=4.734237577 podStartE2EDuration="4.734237577s" podCreationTimestamp="2026-03-21 09:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:15.702831397 +0000 UTC m=+1379.298029666" watchObservedRunningTime="2026-03-21 09:21:15.734237577 +0000 UTC m=+1379.329435846" Mar 21 09:21:15 crc kubenswrapper[4932]: I0321 09:21:15.852886 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 21 09:21:15 crc kubenswrapper[4932]: I0321 09:21:15.924762 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.704358 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1a88675-1241-40f6-9da5-e4f91db28452","Type":"ContainerStarted","Data":"f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582"} Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.705077 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1a88675-1241-40f6-9da5-e4f91db28452","Type":"ContainerStarted","Data":"03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946"} Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.706026 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4d25fc6-974f-4695-8d73-2783af6957f5","Type":"ContainerStarted","Data":"8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68"} Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.706107 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a4d25fc6-974f-4695-8d73-2783af6957f5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68" gracePeriod=30 Mar 21 09:21:18 crc 
kubenswrapper[4932]: I0321 09:21:18.707957 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f9e577fb-4510-4fd9-8560-1fa627d3f94c","Type":"ContainerStarted","Data":"cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92"} Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.713840 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cee8412-51a7-4098-b576-1bd95bc7def8","Type":"ContainerStarted","Data":"b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1"} Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.713880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cee8412-51a7-4098-b576-1bd95bc7def8","Type":"ContainerStarted","Data":"dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f"} Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.714007 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-metadata" containerID="cri-o://b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1" gracePeriod=30 Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.714001 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-log" containerID="cri-o://dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f" gracePeriod=30 Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.822436 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.845024877 podStartE2EDuration="7.822404982s" podCreationTimestamp="2026-03-21 09:21:11 +0000 UTC" firstStartedPulling="2026-03-21 09:21:12.772337531 +0000 UTC m=+1376.367535790" lastFinishedPulling="2026-03-21 
09:21:17.749717626 +0000 UTC m=+1381.344915895" observedRunningTime="2026-03-21 09:21:18.74366909 +0000 UTC m=+1382.338867369" watchObservedRunningTime="2026-03-21 09:21:18.822404982 +0000 UTC m=+1382.417603271" Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.886874 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.942406302 podStartE2EDuration="7.886852793s" podCreationTimestamp="2026-03-21 09:21:11 +0000 UTC" firstStartedPulling="2026-03-21 09:21:13.805581694 +0000 UTC m=+1377.400779963" lastFinishedPulling="2026-03-21 09:21:17.750028185 +0000 UTC m=+1381.345226454" observedRunningTime="2026-03-21 09:21:18.796459364 +0000 UTC m=+1382.391657643" watchObservedRunningTime="2026-03-21 09:21:18.886852793 +0000 UTC m=+1382.482051062" Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.904242 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.198527459 podStartE2EDuration="7.904221861s" podCreationTimestamp="2026-03-21 09:21:11 +0000 UTC" firstStartedPulling="2026-03-21 09:21:13.047272346 +0000 UTC m=+1376.642470615" lastFinishedPulling="2026-03-21 09:21:17.752966758 +0000 UTC m=+1381.348165017" observedRunningTime="2026-03-21 09:21:18.89723772 +0000 UTC m=+1382.492435999" watchObservedRunningTime="2026-03-21 09:21:18.904221861 +0000 UTC m=+1382.499420130" Mar 21 09:21:18 crc kubenswrapper[4932]: I0321 09:21:18.926618 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.6045847760000003 podStartE2EDuration="7.926595036s" podCreationTimestamp="2026-03-21 09:21:11 +0000 UTC" firstStartedPulling="2026-03-21 09:21:13.429641836 +0000 UTC m=+1377.024840105" lastFinishedPulling="2026-03-21 09:21:17.751652096 +0000 UTC m=+1381.346850365" observedRunningTime="2026-03-21 09:21:18.922847767 +0000 UTC m=+1382.518046036" 
watchObservedRunningTime="2026-03-21 09:21:18.926595036 +0000 UTC m=+1382.521793305" Mar 21 09:21:19 crc kubenswrapper[4932]: I0321 09:21:19.726167 4932 generic.go:334] "Generic (PLEG): container finished" podID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerID="dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f" exitCode=143 Mar 21 09:21:19 crc kubenswrapper[4932]: I0321 09:21:19.726577 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cee8412-51a7-4098-b576-1bd95bc7def8","Type":"ContainerDied","Data":"dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f"} Mar 21 09:21:19 crc kubenswrapper[4932]: I0321 09:21:19.833221 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.161150 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.213739 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee8412-51a7-4098-b576-1bd95bc7def8-logs\") pod \"7cee8412-51a7-4098-b576-1bd95bc7def8\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.214009 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-config-data\") pod \"7cee8412-51a7-4098-b576-1bd95bc7def8\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.214046 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpd6w\" (UniqueName: \"kubernetes.io/projected/7cee8412-51a7-4098-b576-1bd95bc7def8-kube-api-access-gpd6w\") pod \"7cee8412-51a7-4098-b576-1bd95bc7def8\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.214064 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-combined-ca-bundle\") pod \"7cee8412-51a7-4098-b576-1bd95bc7def8\" (UID: \"7cee8412-51a7-4098-b576-1bd95bc7def8\") " Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.214198 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cee8412-51a7-4098-b576-1bd95bc7def8-logs" (OuterVolumeSpecName: "logs") pod "7cee8412-51a7-4098-b576-1bd95bc7def8" (UID: "7cee8412-51a7-4098-b576-1bd95bc7def8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.214596 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cee8412-51a7-4098-b576-1bd95bc7def8-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.221113 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cee8412-51a7-4098-b576-1bd95bc7def8-kube-api-access-gpd6w" (OuterVolumeSpecName: "kube-api-access-gpd6w") pod "7cee8412-51a7-4098-b576-1bd95bc7def8" (UID: "7cee8412-51a7-4098-b576-1bd95bc7def8"). InnerVolumeSpecName "kube-api-access-gpd6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.244736 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-config-data" (OuterVolumeSpecName: "config-data") pod "7cee8412-51a7-4098-b576-1bd95bc7def8" (UID: "7cee8412-51a7-4098-b576-1bd95bc7def8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.255686 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cee8412-51a7-4098-b576-1bd95bc7def8" (UID: "7cee8412-51a7-4098-b576-1bd95bc7def8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.316342 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.316397 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpd6w\" (UniqueName: \"kubernetes.io/projected/7cee8412-51a7-4098-b576-1bd95bc7def8-kube-api-access-gpd6w\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.316409 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cee8412-51a7-4098-b576-1bd95bc7def8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.738880 4932 generic.go:334] "Generic (PLEG): container finished" podID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerID="b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1" exitCode=0 Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.738930 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cee8412-51a7-4098-b576-1bd95bc7def8","Type":"ContainerDied","Data":"b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1"} Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.739211 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cee8412-51a7-4098-b576-1bd95bc7def8","Type":"ContainerDied","Data":"7d81a2b240145f33b40dfd07630ef7b6f10b457acb40a132711ee93c1948b03e"} Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.739233 4932 scope.go:117] "RemoveContainer" containerID="b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.738975 4932 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.776322 4932 scope.go:117] "RemoveContainer" containerID="dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.778747 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.801922 4932 scope.go:117] "RemoveContainer" containerID="b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1" Mar 21 09:21:20 crc kubenswrapper[4932]: E0321 09:21:20.802447 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1\": container with ID starting with b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1 not found: ID does not exist" containerID="b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.802505 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1"} err="failed to get container status \"b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1\": rpc error: code = NotFound desc = could not find container \"b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1\": container with ID starting with b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1 not found: ID does not exist" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.802542 4932 scope.go:117] "RemoveContainer" containerID="dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.802668 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:20 crc 
kubenswrapper[4932]: E0321 09:21:20.802941 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f\": container with ID starting with dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f not found: ID does not exist" containerID="dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.802963 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f"} err="failed to get container status \"dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f\": rpc error: code = NotFound desc = could not find container \"dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f\": container with ID starting with dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f not found: ID does not exist" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.814118 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:20 crc kubenswrapper[4932]: E0321 09:21:20.814724 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-metadata" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.814752 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-metadata" Mar 21 09:21:20 crc kubenswrapper[4932]: E0321 09:21:20.814778 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-log" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.814788 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-log" 
Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.815159 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-metadata" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.815200 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" containerName="nova-metadata-log" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.816661 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.821902 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.822430 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.824734 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.932292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-logs\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.932372 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.932483 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-48m89\" (UniqueName: \"kubernetes.io/projected/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-kube-api-access-48m89\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.932626 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-config-data\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:20 crc kubenswrapper[4932]: I0321 09:21:20.932687 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.034656 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48m89\" (UniqueName: \"kubernetes.io/projected/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-kube-api-access-48m89\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.034809 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-config-data\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.034880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.034946 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-logs\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.034996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.035359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-logs\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.040092 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-config-data\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.040549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.043787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.052466 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48m89\" (UniqueName: \"kubernetes.io/projected/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-kube-api-access-48m89\") pod \"nova-metadata-0\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") " pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.151876 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.661977 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.703835 4932 scope.go:117] "RemoveContainer" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07" Mar 21 09:21:21 crc kubenswrapper[4932]: E0321 09:21:21.704170 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.722689 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cee8412-51a7-4098-b576-1bd95bc7def8" path="/var/lib/kubelet/pods/7cee8412-51a7-4098-b576-1bd95bc7def8/volumes" Mar 21 09:21:21 crc kubenswrapper[4932]: I0321 09:21:21.751877 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898","Type":"ContainerStarted","Data":"2db0f11072a2b57a619bf40a3d8c7ef110b2be0455ba96bb5b7e6cba80e436f5"} Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.136774 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.136845 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.583875 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.584212 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.606542 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.615994 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.621187 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.686739 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c4d945f5-9vdcr"] Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.687339 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" podUID="03273311-f853-47e1-a73b-649485129727" containerName="dnsmasq-dns" containerID="cri-o://a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29" gracePeriod=10 Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.764987 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898","Type":"ContainerStarted","Data":"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"} Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.765051 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898","Type":"ContainerStarted","Data":"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"} Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.804548 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.80452171 podStartE2EDuration="2.80452171s" podCreationTimestamp="2026-03-21 09:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:22.788815306 +0000 UTC m=+1386.384013575" watchObservedRunningTime="2026-03-21 09:21:22.80452171 +0000 UTC m=+1386.399719969" Mar 21 09:21:22 crc kubenswrapper[4932]: I0321 09:21:22.806982 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.221629 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.221940 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.285328 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.417567 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-svc\") pod \"03273311-f853-47e1-a73b-649485129727\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.417653 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-swift-storage-0\") pod \"03273311-f853-47e1-a73b-649485129727\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.417832 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-nb\") pod \"03273311-f853-47e1-a73b-649485129727\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.417850 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94xf\" (UniqueName: \"kubernetes.io/projected/03273311-f853-47e1-a73b-649485129727-kube-api-access-c94xf\") pod \"03273311-f853-47e1-a73b-649485129727\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.417915 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-sb\") pod \"03273311-f853-47e1-a73b-649485129727\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.417967 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-config\") pod \"03273311-f853-47e1-a73b-649485129727\" (UID: \"03273311-f853-47e1-a73b-649485129727\") " Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.425766 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03273311-f853-47e1-a73b-649485129727-kube-api-access-c94xf" (OuterVolumeSpecName: "kube-api-access-c94xf") pod "03273311-f853-47e1-a73b-649485129727" (UID: "03273311-f853-47e1-a73b-649485129727"). InnerVolumeSpecName "kube-api-access-c94xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.491075 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03273311-f853-47e1-a73b-649485129727" (UID: "03273311-f853-47e1-a73b-649485129727"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.492087 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-config" (OuterVolumeSpecName: "config") pod "03273311-f853-47e1-a73b-649485129727" (UID: "03273311-f853-47e1-a73b-649485129727"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.516466 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03273311-f853-47e1-a73b-649485129727" (UID: "03273311-f853-47e1-a73b-649485129727"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.517881 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03273311-f853-47e1-a73b-649485129727" (UID: "03273311-f853-47e1-a73b-649485129727"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.520904 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.521140 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94xf\" (UniqueName: \"kubernetes.io/projected/03273311-f853-47e1-a73b-649485129727-kube-api-access-c94xf\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.521235 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.521323 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.521441 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.556418 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03273311-f853-47e1-a73b-649485129727" (UID: "03273311-f853-47e1-a73b-649485129727"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.623923 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03273311-f853-47e1-a73b-649485129727-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.777729 4932 generic.go:334] "Generic (PLEG): container finished" podID="7c9966f4-a444-47a1-9394-7d0483967734" containerID="36ef173a5f14536087ad02cd2c4e7c5c85f38eaa9c9c63e3aa0461cf3e6eae93" exitCode=0 Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.777759 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwhrl" event={"ID":"7c9966f4-a444-47a1-9394-7d0483967734","Type":"ContainerDied","Data":"36ef173a5f14536087ad02cd2c4e7c5c85f38eaa9c9c63e3aa0461cf3e6eae93"} Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.780655 4932 generic.go:334] "Generic (PLEG): container finished" podID="03273311-f853-47e1-a73b-649485129727" containerID="a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29" exitCode=0 Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.780686 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" event={"ID":"03273311-f853-47e1-a73b-649485129727","Type":"ContainerDied","Data":"a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29"} Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.780735 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" event={"ID":"03273311-f853-47e1-a73b-649485129727","Type":"ContainerDied","Data":"14b0a49520509811d46e21e5056c067df8f2ed97dba388f0850c2633ccd25f3b"} Mar 21 
09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.780763 4932 scope.go:117] "RemoveContainer" containerID="a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.781006 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c4d945f5-9vdcr" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.817260 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c4d945f5-9vdcr"] Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.821252 4932 scope.go:117] "RemoveContainer" containerID="bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.828754 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84c4d945f5-9vdcr"] Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.865632 4932 scope.go:117] "RemoveContainer" containerID="a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29" Mar 21 09:21:23 crc kubenswrapper[4932]: E0321 09:21:23.866244 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29\": container with ID starting with a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29 not found: ID does not exist" containerID="a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.866293 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29"} err="failed to get container status \"a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29\": rpc error: code = NotFound desc = could not find container \"a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29\": container with ID 
starting with a99c2eb22b8103019b689583411c1565d9493db8454fa2c44fdc840594616c29 not found: ID does not exist" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.866328 4932 scope.go:117] "RemoveContainer" containerID="bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888" Mar 21 09:21:23 crc kubenswrapper[4932]: E0321 09:21:23.866615 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888\": container with ID starting with bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888 not found: ID does not exist" containerID="bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888" Mar 21 09:21:23 crc kubenswrapper[4932]: I0321 09:21:23.866638 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888"} err="failed to get container status \"bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888\": rpc error: code = NotFound desc = could not find container \"bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888\": container with ID starting with bd9b7d9985d3b59570c9847bb9182739f921ae7a0f90b587b510ac9ea5b95888 not found: ID does not exist" Mar 21 09:21:24 crc kubenswrapper[4932]: W0321 09:21:24.272748 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cee8412_51a7_4098_b576_1bd95bc7def8.slice/crio-dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f.scope WatchSource:0}: Error finding container dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f: Status 404 returned error can't find the container with id dd60c85f8d74611d382b712798ea16d10fe843221e835a82d7559d0a8491978f Mar 21 09:21:24 crc kubenswrapper[4932]: W0321 09:21:24.277350 4932 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cee8412_51a7_4098_b576_1bd95bc7def8.slice/crio-b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1.scope WatchSource:0}: Error finding container b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1: Status 404 returned error can't find the container with id b2815c1621ee386dade2ddd95b3bf6921cc1c3253a5f5da82245eceeec02b3e1 Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.693157 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.703553 4932 scope.go:117] "RemoveContainer" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47" Mar 21 09:21:24 crc kubenswrapper[4932]: E0321 09:21:24.703904 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.795608 4932 generic.go:334] "Generic (PLEG): container finished" podID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerID="00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1" exitCode=137 Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.795689 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.795724 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerDied","Data":"00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1"} Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.795788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f15063b8-98f3-4a4b-94b3-4db808f7ad73","Type":"ContainerDied","Data":"561faa8fbbcc5dc34e88defd87906126e891902bafc8ad285364375053dd3a64"} Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.795811 4932 scope.go:117] "RemoveContainer" containerID="00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.850873 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vnjl\" (UniqueName: \"kubernetes.io/projected/f15063b8-98f3-4a4b-94b3-4db808f7ad73-kube-api-access-7vnjl\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.851185 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-combined-ca-bundle\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.851419 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-log-httpd\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.851452 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-sg-core-conf-yaml\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.851524 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-scripts\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.851547 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-run-httpd\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.851594 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-config-data\") pod \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\" (UID: \"f15063b8-98f3-4a4b-94b3-4db808f7ad73\") " Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.853203 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.854343 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.867572 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15063b8-98f3-4a4b-94b3-4db808f7ad73-kube-api-access-7vnjl" (OuterVolumeSpecName: "kube-api-access-7vnjl") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "kube-api-access-7vnjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.869980 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-scripts" (OuterVolumeSpecName: "scripts") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.909504 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.931439 4932 scope.go:117] "RemoveContainer" containerID="ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.954232 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.954286 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.954297 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.954305 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f15063b8-98f3-4a4b-94b3-4db808f7ad73-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.954315 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vnjl\" (UniqueName: \"kubernetes.io/projected/f15063b8-98f3-4a4b-94b3-4db808f7ad73-kube-api-access-7vnjl\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:24 crc kubenswrapper[4932]: I0321 09:21:24.983126 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.022640 4932 scope.go:117] "RemoveContainer" containerID="24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.056545 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.073355 4932 scope.go:117] "RemoveContainer" containerID="9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.080789 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-config-data" (OuterVolumeSpecName: "config-data") pod "f15063b8-98f3-4a4b-94b3-4db808f7ad73" (UID: "f15063b8-98f3-4a4b-94b3-4db808f7ad73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.106364 4932 scope.go:117] "RemoveContainer" containerID="00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.106881 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1\": container with ID starting with 00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1 not found: ID does not exist" containerID="00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.106919 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1"} err="failed to get container status \"00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1\": rpc error: code = NotFound desc = could not find container \"00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1\": container with ID starting with 00247f0827b19b73c5879ba2a6e48fe00aa91c63a6893f07e15086113ca572d1 not found: ID does not exist" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.106948 4932 scope.go:117] "RemoveContainer" containerID="ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.107337 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453\": container with ID starting with ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453 not found: ID does not exist" containerID="ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.107364 
4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453"} err="failed to get container status \"ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453\": rpc error: code = NotFound desc = could not find container \"ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453\": container with ID starting with ba96f598ae13ffece49365532a96cc22abd648dcb65a34f2c2df8fa3192b6453 not found: ID does not exist" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.107381 4932 scope.go:117] "RemoveContainer" containerID="24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.109187 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729\": container with ID starting with 24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729 not found: ID does not exist" containerID="24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.109228 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729"} err="failed to get container status \"24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729\": rpc error: code = NotFound desc = could not find container \"24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729\": container with ID starting with 24bb4e088a2b4c22bef86b533f31c7db199df64178e0f70164d46afe9b05c729 not found: ID does not exist" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.109254 4932 scope.go:117] "RemoveContainer" containerID="9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 
09:21:25.109677 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc\": container with ID starting with 9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc not found: ID does not exist" containerID="9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.109701 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc"} err="failed to get container status \"9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc\": rpc error: code = NotFound desc = could not find container \"9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc\": container with ID starting with 9842900cdb3b650179c906a1b62e341ac3120edc8b81681c935baf3f6bb498cc not found: ID does not exist" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.159861 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f15063b8-98f3-4a4b-94b3-4db808f7ad73-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.164542 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.180462 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194069 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.194608 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-central-agent" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194626 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-central-agent" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.194657 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="sg-core" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194667 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="sg-core" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.194682 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03273311-f853-47e1-a73b-649485129727" containerName="init" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194690 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="03273311-f853-47e1-a73b-649485129727" containerName="init" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.194707 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-notification-agent" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194715 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-notification-agent" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.194729 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03273311-f853-47e1-a73b-649485129727" containerName="dnsmasq-dns" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194735 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="03273311-f853-47e1-a73b-649485129727" containerName="dnsmasq-dns" Mar 21 09:21:25 crc kubenswrapper[4932]: E0321 09:21:25.194751 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="proxy-httpd" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194756 4932 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="proxy-httpd" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194935 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-central-agent" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194948 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="proxy-httpd" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194960 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="sg-core" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194972 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="03273311-f853-47e1-a73b-649485129727" containerName="dnsmasq-dns" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.194985 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" containerName="ceilometer-notification-agent" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.198978 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.207724 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.208439 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.236466 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.363732 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-run-httpd\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.364317 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-log-httpd\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.364453 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-config-data\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.364576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " 
pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.364821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.365077 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-scripts\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.365476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsczm\" (UniqueName: \"kubernetes.io/projected/2962ecba-b083-4132-bd2c-eac94506c576-kube-api-access-tsczm\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0" Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.464559 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsczm\" (UniqueName: \"kubernetes.io/projected/2962ecba-b083-4132-bd2c-eac94506c576-kube-api-access-tsczm\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-run-httpd\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468115 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-log-httpd\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468137 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-config-data\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468166 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.468321 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-scripts\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.469167 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-log-httpd\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.469240 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-run-httpd\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.474290 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.474829 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.475375 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-config-data\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.477299 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-scripts\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.500257 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsczm\" (UniqueName: \"kubernetes.io/projected/2962ecba-b083-4132-bd2c-eac94506c576-kube-api-access-tsczm\") pod \"ceilometer-0\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") " pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.570677 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lnw4\" (UniqueName: \"kubernetes.io/projected/7c9966f4-a444-47a1-9394-7d0483967734-kube-api-access-5lnw4\") pod \"7c9966f4-a444-47a1-9394-7d0483967734\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") "
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.570821 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-scripts\") pod \"7c9966f4-a444-47a1-9394-7d0483967734\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") "
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.571009 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-config-data\") pod \"7c9966f4-a444-47a1-9394-7d0483967734\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") "
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.571220 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-combined-ca-bundle\") pod \"7c9966f4-a444-47a1-9394-7d0483967734\" (UID: \"7c9966f4-a444-47a1-9394-7d0483967734\") "
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.575119 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9966f4-a444-47a1-9394-7d0483967734-kube-api-access-5lnw4" (OuterVolumeSpecName: "kube-api-access-5lnw4") pod "7c9966f4-a444-47a1-9394-7d0483967734" (UID: "7c9966f4-a444-47a1-9394-7d0483967734"). InnerVolumeSpecName "kube-api-access-5lnw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.577576 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.599624 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-scripts" (OuterVolumeSpecName: "scripts") pod "7c9966f4-a444-47a1-9394-7d0483967734" (UID: "7c9966f4-a444-47a1-9394-7d0483967734"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.616110 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c9966f4-a444-47a1-9394-7d0483967734" (UID: "7c9966f4-a444-47a1-9394-7d0483967734"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.618560 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-config-data" (OuterVolumeSpecName: "config-data") pod "7c9966f4-a444-47a1-9394-7d0483967734" (UID: "7c9966f4-a444-47a1-9394-7d0483967734"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.675295 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.675330 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.675341 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lnw4\" (UniqueName: \"kubernetes.io/projected/7c9966f4-a444-47a1-9394-7d0483967734-kube-api-access-5lnw4\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.675351 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c9966f4-a444-47a1-9394-7d0483967734-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.719760 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03273311-f853-47e1-a73b-649485129727" path="/var/lib/kubelet/pods/03273311-f853-47e1-a73b-649485129727/volumes"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.720768 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f15063b8-98f3-4a4b-94b3-4db808f7ad73" path="/var/lib/kubelet/pods/f15063b8-98f3-4a4b-94b3-4db808f7ad73/volumes"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.810419 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwhrl" event={"ID":"7c9966f4-a444-47a1-9394-7d0483967734","Type":"ContainerDied","Data":"be14166f26afefae39df344ed0b912b72268310b1e1c52ad55c5e7ce7dff2431"}
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.810463 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be14166f26afefae39df344ed0b912b72268310b1e1c52ad55c5e7ce7dff2431"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.810527 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwhrl"
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.992920 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.996971 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-log" containerID="cri-o://03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946" gracePeriod=30
Mar 21 09:21:25 crc kubenswrapper[4932]: I0321 09:21:25.997779 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-api" containerID="cri-o://f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582" gracePeriod=30
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.009869 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.010232 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" containerName="nova-scheduler-scheduler" containerID="cri-o://cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92" gracePeriod=30
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.066820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.105320 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.105559 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-log" containerID="cri-o://01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8" gracePeriod=30
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.106024 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-metadata" containerID="cri-o://8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361" gracePeriod=30
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.782184 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.835360 4932 generic.go:334] "Generic (PLEG): container finished" podID="e1a88675-1241-40f6-9da5-e4f91db28452" containerID="03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946" exitCode=143
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.835488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1a88675-1241-40f6-9da5-e4f91db28452","Type":"ContainerDied","Data":"03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838432 4932 generic.go:334] "Generic (PLEG): container finished" podID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerID="8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361" exitCode=0
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838471 4932 generic.go:334] "Generic (PLEG): container finished" podID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerID="01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8" exitCode=143
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838512 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898","Type":"ContainerDied","Data":"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838531 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898","Type":"ContainerDied","Data":"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838610 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898","Type":"ContainerDied","Data":"2db0f11072a2b57a619bf40a3d8c7ef110b2be0455ba96bb5b7e6cba80e436f5"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.838630 4932 scope.go:117] "RemoveContainer" containerID="8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.843708 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerStarted","Data":"258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.843752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerStarted","Data":"9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.843806 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerStarted","Data":"dfe3f8bbb4d0fb2f6b84b51fc7d98c7c469a2a1c3d53e2bdc1608638083dcf68"}
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.889162 4932 scope.go:117] "RemoveContainer" containerID="01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.918494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-combined-ca-bundle\") pod \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") "
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.918608 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-config-data\") pod \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") "
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.918922 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-nova-metadata-tls-certs\") pod \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") "
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.919024 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-logs\") pod \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") "
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.919334 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48m89\" (UniqueName: \"kubernetes.io/projected/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-kube-api-access-48m89\") pod \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\" (UID: \"8b93c479-5d47-4fb7-ac2f-d94ff8bb0898\") "
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.921287 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-logs" (OuterVolumeSpecName: "logs") pod "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" (UID: "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.932050 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-kube-api-access-48m89" (OuterVolumeSpecName: "kube-api-access-48m89") pod "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" (UID: "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898"). InnerVolumeSpecName "kube-api-access-48m89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.933607 4932 scope.go:117] "RemoveContainer" containerID="8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"
Mar 21 09:21:26 crc kubenswrapper[4932]: E0321 09:21:26.934209 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361\": container with ID starting with 8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361 not found: ID does not exist" containerID="8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.934255 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"} err="failed to get container status \"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361\": rpc error: code = NotFound desc = could not find container \"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361\": container with ID starting with 8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361 not found: ID does not exist"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.934283 4932 scope.go:117] "RemoveContainer" containerID="01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"
Mar 21 09:21:26 crc kubenswrapper[4932]: E0321 09:21:26.934645 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8\": container with ID starting with 01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8 not found: ID does not exist" containerID="01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.934663 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"} err="failed to get container status \"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8\": rpc error: code = NotFound desc = could not find container \"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8\": container with ID starting with 01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8 not found: ID does not exist"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.934675 4932 scope.go:117] "RemoveContainer" containerID="8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.935751 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361"} err="failed to get container status \"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361\": rpc error: code = NotFound desc = could not find container \"8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361\": container with ID starting with 8212b5b3a404a4155e9ed814b85dc033bac19ed640809588ffd8c65722e0f361 not found: ID does not exist"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.935775 4932 scope.go:117] "RemoveContainer" containerID="01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.936162 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8"} err="failed to get container status \"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8\": rpc error: code = NotFound desc = could not find container \"01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8\": container with ID starting with 01971430c1cc97bc95c57ccc04c8410f7c9aa12a7a9b25ce90e4736a3845b4e8 not found: ID does not exist"
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.986721 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-config-data" (OuterVolumeSpecName: "config-data") pod "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" (UID: "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:26 crc kubenswrapper[4932]: I0321 09:21:26.994937 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" (UID: "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.020534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" (UID: "8b93c479-5d47-4fb7-ac2f-d94ff8bb0898"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.022154 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48m89\" (UniqueName: \"kubernetes.io/projected/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-kube-api-access-48m89\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.022186 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.022196 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.022206 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.022221 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.216608 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.234752 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.281455 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.282156 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-log"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.282181 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-log"
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.282220 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-metadata"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.282227 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-metadata"
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.282244 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9966f4-a444-47a1-9394-7d0483967734" containerName="nova-manage"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.282252 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9966f4-a444-47a1-9394-7d0483967734" containerName="nova-manage"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.282499 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-metadata"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.282532 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" containerName="nova-metadata-log"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.282555 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9966f4-a444-47a1-9394-7d0483967734" containerName="nova-manage"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.284015 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.289206 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.289504 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.294023 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.430231 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.430277 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhmw\" (UniqueName: \"kubernetes.io/projected/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-kube-api-access-8bhmw\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.430447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-logs\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.430482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.430562 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-config-data\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.532829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.532871 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bhmw\" (UniqueName: \"kubernetes.io/projected/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-kube-api-access-8bhmw\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.532956 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-logs\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.532983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.533043 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-config-data\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.533880 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-logs\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.538585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-config-data\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.539828 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.551462 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.558109 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bhmw\" (UniqueName: \"kubernetes.io/projected/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-kube-api-access-8bhmw\") pod \"nova-metadata-0\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") " pod="openstack/nova-metadata-0"
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.586270 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.588720 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.590704 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 21 09:21:27 crc kubenswrapper[4932]: E0321 09:21:27.590781 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" containerName="nova-scheduler-scheduler"
Mar 21 09:21:27 crc kubenswrapper[4932]: I0321 09:21:27.615254 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:27.726647 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b93c479-5d47-4fb7-ac2f-d94ff8bb0898" path="/var/lib/kubelet/pods/8b93c479-5d47-4fb7-ac2f-d94ff8bb0898/volumes"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:27.878443 4932 generic.go:334] "Generic (PLEG): container finished" podID="98d55e78-bebc-4aa9-9043-a107dce766ab" containerID="7f9e8bdaca3cd0e46e062a109bef725daec77c2c828df2ff096430f39d58156c" exitCode=0
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:27.878808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-92xlk" event={"ID":"98d55e78-bebc-4aa9-9043-a107dce766ab","Type":"ContainerDied","Data":"7f9e8bdaca3cd0e46e062a109bef725daec77c2c828df2ff096430f39d58156c"}
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.886430 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.892316 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerStarted","Data":"fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51"}
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.895235 4932 generic.go:334] "Generic (PLEG): container finished" podID="e1a88675-1241-40f6-9da5-e4f91db28452" containerID="f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582" exitCode=0
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.895462 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.895949 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1a88675-1241-40f6-9da5-e4f91db28452","Type":"ContainerDied","Data":"f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582"}
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.895979 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1a88675-1241-40f6-9da5-e4f91db28452","Type":"ContainerDied","Data":"959cde1fa192a0d75de555a656f18657efc10ce8de226bb1cc0fc2f7bc8ce87a"}
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.896002 4932 scope.go:117] "RemoveContainer" containerID="f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.943539 4932 scope.go:117] "RemoveContainer" containerID="03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.975436 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.987091 4932 scope.go:117] "RemoveContainer" containerID="f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582"
Mar 21 09:21:28 crc kubenswrapper[4932]: E0321 09:21:28.988130 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582\": container with ID starting with f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582 not found: ID does not exist" containerID="f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582"
Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.988178 4932 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582"} err="failed to get container status \"f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582\": rpc error: code = NotFound desc = could not find container \"f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582\": container with ID starting with f61b27cee3bc625b231f2d153381774caa32adf4a98097cecd401b3861d92582 not found: ID does not exist" Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.988208 4932 scope.go:117] "RemoveContainer" containerID="03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946" Mar 21 09:21:28 crc kubenswrapper[4932]: E0321 09:21:28.988823 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946\": container with ID starting with 03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946 not found: ID does not exist" containerID="03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946" Mar 21 09:21:28 crc kubenswrapper[4932]: I0321 09:21:28.988891 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946"} err="failed to get container status \"03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946\": rpc error: code = NotFound desc = could not find container \"03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946\": container with ID starting with 03abb215cba99e27f9fcacd4795317bd52ec188dfc7de75df735181a9cb8a946 not found: ID does not exist" Mar 21 09:21:28 crc kubenswrapper[4932]: W0321 09:21:28.994816 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26cb3c10_3a23_4ea5_91e1_11dc9d91f3bd.slice/crio-edc04088e43b742acca26bff456c8014439783e78326f615e0ccece19ad655b8 WatchSource:0}: Error finding container edc04088e43b742acca26bff456c8014439783e78326f615e0ccece19ad655b8: Status 404 returned error can't find the container with id edc04088e43b742acca26bff456c8014439783e78326f615e0ccece19ad655b8 Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.074414 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-combined-ca-bundle\") pod \"e1a88675-1241-40f6-9da5-e4f91db28452\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.075835 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a88675-1241-40f6-9da5-e4f91db28452-logs\") pod \"e1a88675-1241-40f6-9da5-e4f91db28452\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.075899 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-config-data\") pod \"e1a88675-1241-40f6-9da5-e4f91db28452\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.076064 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6fsz\" (UniqueName: \"kubernetes.io/projected/e1a88675-1241-40f6-9da5-e4f91db28452-kube-api-access-t6fsz\") pod \"e1a88675-1241-40f6-9da5-e4f91db28452\" (UID: \"e1a88675-1241-40f6-9da5-e4f91db28452\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.082155 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e1a88675-1241-40f6-9da5-e4f91db28452-logs" (OuterVolumeSpecName: "logs") pod "e1a88675-1241-40f6-9da5-e4f91db28452" (UID: "e1a88675-1241-40f6-9da5-e4f91db28452"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.094724 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a88675-1241-40f6-9da5-e4f91db28452-kube-api-access-t6fsz" (OuterVolumeSpecName: "kube-api-access-t6fsz") pod "e1a88675-1241-40f6-9da5-e4f91db28452" (UID: "e1a88675-1241-40f6-9da5-e4f91db28452"). InnerVolumeSpecName "kube-api-access-t6fsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.143902 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1a88675-1241-40f6-9da5-e4f91db28452" (UID: "e1a88675-1241-40f6-9da5-e4f91db28452"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.156765 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-config-data" (OuterVolumeSpecName: "config-data") pod "e1a88675-1241-40f6-9da5-e4f91db28452" (UID: "e1a88675-1241-40f6-9da5-e4f91db28452"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.178707 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.178749 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a88675-1241-40f6-9da5-e4f91db28452-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.178759 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a88675-1241-40f6-9da5-e4f91db28452-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.178770 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6fsz\" (UniqueName: \"kubernetes.io/projected/e1a88675-1241-40f6-9da5-e4f91db28452-kube-api-access-t6fsz\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.283102 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.330423 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.354920 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:29 crc kubenswrapper[4932]: E0321 09:21:29.355681 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-log" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.355719 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-log" Mar 21 09:21:29 crc kubenswrapper[4932]: E0321 
09:21:29.355753 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-api" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.355761 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-api" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.356013 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-log" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.356066 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" containerName="nova-api-api" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.357491 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.373188 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.400842 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.400956 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5441e39-9e5e-442a-a498-b031f7e6f382-logs\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.401043 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-config-data\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.401108 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnzq\" (UniqueName: \"kubernetes.io/projected/c5441e39-9e5e-442a-a498-b031f7e6f382-kube-api-access-4dnzq\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.408806 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.425582 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.509206 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-scripts\") pod \"98d55e78-bebc-4aa9-9043-a107dce766ab\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.509470 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-config-data\") pod \"98d55e78-bebc-4aa9-9043-a107dce766ab\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.509506 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-combined-ca-bundle\") pod \"98d55e78-bebc-4aa9-9043-a107dce766ab\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 
09:21:29.509547 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5266\" (UniqueName: \"kubernetes.io/projected/98d55e78-bebc-4aa9-9043-a107dce766ab-kube-api-access-r5266\") pod \"98d55e78-bebc-4aa9-9043-a107dce766ab\" (UID: \"98d55e78-bebc-4aa9-9043-a107dce766ab\") " Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.509720 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5441e39-9e5e-442a-a498-b031f7e6f382-logs\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.510860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-config-data\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.510924 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnzq\" (UniqueName: \"kubernetes.io/projected/c5441e39-9e5e-442a-a498-b031f7e6f382-kube-api-access-4dnzq\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.511308 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.513011 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5441e39-9e5e-442a-a498-b031f7e6f382-logs\") pod \"nova-api-0\" (UID: 
\"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.525273 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-scripts" (OuterVolumeSpecName: "scripts") pod "98d55e78-bebc-4aa9-9043-a107dce766ab" (UID: "98d55e78-bebc-4aa9-9043-a107dce766ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.526136 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d55e78-bebc-4aa9-9043-a107dce766ab-kube-api-access-r5266" (OuterVolumeSpecName: "kube-api-access-r5266") pod "98d55e78-bebc-4aa9-9043-a107dce766ab" (UID: "98d55e78-bebc-4aa9-9043-a107dce766ab"). InnerVolumeSpecName "kube-api-access-r5266". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.530243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.530803 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-config-data\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.535400 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnzq\" (UniqueName: \"kubernetes.io/projected/c5441e39-9e5e-442a-a498-b031f7e6f382-kube-api-access-4dnzq\") pod \"nova-api-0\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") " pod="openstack/nova-api-0" Mar 
21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.550005 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-config-data" (OuterVolumeSpecName: "config-data") pod "98d55e78-bebc-4aa9-9043-a107dce766ab" (UID: "98d55e78-bebc-4aa9-9043-a107dce766ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.569650 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98d55e78-bebc-4aa9-9043-a107dce766ab" (UID: "98d55e78-bebc-4aa9-9043-a107dce766ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.613049 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.613311 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.613423 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d55e78-bebc-4aa9-9043-a107dce766ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.613519 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5266\" (UniqueName: \"kubernetes.io/projected/98d55e78-bebc-4aa9-9043-a107dce766ab-kube-api-access-r5266\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.715397 
4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a88675-1241-40f6-9da5-e4f91db28452" path="/var/lib/kubelet/pods/e1a88675-1241-40f6-9da5-e4f91db28452/volumes" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.826648 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.909347 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-92xlk" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.909333 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-92xlk" event={"ID":"98d55e78-bebc-4aa9-9043-a107dce766ab","Type":"ContainerDied","Data":"e99d853dcb6ecb943df7dc329cff9e8e7f4ff728a49a1158474db10f63e12b3e"} Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.909535 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99d853dcb6ecb943df7dc329cff9e8e7f4ff728a49a1158474db10f63e12b3e" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.918992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerStarted","Data":"ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661"} Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.920689 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.943364 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd","Type":"ContainerStarted","Data":"a3daa065161374673b53c615c8a08e9ba54c5624298ea3c3a743985e48d1ff11"} Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.943488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd","Type":"ContainerStarted","Data":"499dd71e04450c3c555959978de3d7f18fd0435b5c5cc2e2e80e0576cd1678cb"} Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.943502 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd","Type":"ContainerStarted","Data":"edc04088e43b742acca26bff456c8014439783e78326f615e0ccece19ad655b8"} Mar 21 09:21:29 crc kubenswrapper[4932]: I0321 09:21:29.958095 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.48014392 podStartE2EDuration="4.958064828s" podCreationTimestamp="2026-03-21 09:21:25 +0000 UTC" firstStartedPulling="2026-03-21 09:21:26.067755184 +0000 UTC m=+1389.662953453" lastFinishedPulling="2026-03-21 09:21:29.545676092 +0000 UTC m=+1393.140874361" observedRunningTime="2026-03-21 09:21:29.949353314 +0000 UTC m=+1393.544551613" watchObservedRunningTime="2026-03-21 09:21:29.958064828 +0000 UTC m=+1393.553263097" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.021159 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.021124476 podStartE2EDuration="3.021124476s" podCreationTimestamp="2026-03-21 09:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:29.98000965 +0000 UTC m=+1393.575207919" watchObservedRunningTime="2026-03-21 09:21:30.021124476 +0000 UTC m=+1393.616322745" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.040793 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 21 09:21:30 crc kubenswrapper[4932]: E0321 09:21:30.041319 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d55e78-bebc-4aa9-9043-a107dce766ab" containerName="nova-cell1-conductor-db-sync" Mar 21 
09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.041334 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d55e78-bebc-4aa9-9043-a107dce766ab" containerName="nova-cell1-conductor-db-sync" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.041551 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d55e78-bebc-4aa9-9043-a107dce766ab" containerName="nova-cell1-conductor-db-sync" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.042320 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.045210 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.051330 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.126187 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8ebe60-2636-4f10-84b9-4f9056ee3323-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.126622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8ebe60-2636-4f10-84b9-4f9056ee3323-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.126661 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfngg\" (UniqueName: 
\"kubernetes.io/projected/4a8ebe60-2636-4f10-84b9-4f9056ee3323-kube-api-access-jfngg\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.225756 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.225817 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.230157 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8ebe60-2636-4f10-84b9-4f9056ee3323-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.230256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfngg\" (UniqueName: \"kubernetes.io/projected/4a8ebe60-2636-4f10-84b9-4f9056ee3323-kube-api-access-jfngg\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.230560 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8ebe60-2636-4f10-84b9-4f9056ee3323-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.236996 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a8ebe60-2636-4f10-84b9-4f9056ee3323-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.238835 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a8ebe60-2636-4f10-84b9-4f9056ee3323-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.250108 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfngg\" (UniqueName: \"kubernetes.io/projected/4a8ebe60-2636-4f10-84b9-4f9056ee3323-kube-api-access-jfngg\") pod \"nova-cell1-conductor-0\" (UID: \"4a8ebe60-2636-4f10-84b9-4f9056ee3323\") " pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: W0321 09:21:30.354825 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5441e39_9e5e_442a_a498_b031f7e6f382.slice/crio-e150059ba31b0d1ed4e779bd3443bfe63a28e141ac7eee71960aa45cb2e198bd WatchSource:0}: Error finding container e150059ba31b0d1ed4e779bd3443bfe63a28e141ac7eee71960aa45cb2e198bd: Status 404 returned error can't find the container with id e150059ba31b0d1ed4e779bd3443bfe63a28e141ac7eee71960aa45cb2e198bd Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.356001 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.380961 4932 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:30 crc kubenswrapper[4932]: W0321 09:21:30.880878 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a8ebe60_2636_4f10_84b9_4f9056ee3323.slice/crio-f50ee6d7ace2f4a0a769983f1d16b1b125d314b6b6713ab4ff545deb83a671a5 WatchSource:0}: Error finding container f50ee6d7ace2f4a0a769983f1d16b1b125d314b6b6713ab4ff545deb83a671a5: Status 404 returned error can't find the container with id f50ee6d7ace2f4a0a769983f1d16b1b125d314b6b6713ab4ff545deb83a671a5 Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.881730 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.981894 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c5441e39-9e5e-442a-a498-b031f7e6f382","Type":"ContainerStarted","Data":"e221a0366a8e894e1ba2ae131f0a3db151cf9b247f6a4f2d85a0a3744d6cfadb"} Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.981934 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c5441e39-9e5e-442a-a498-b031f7e6f382","Type":"ContainerStarted","Data":"595e1b6f8ef08ae4d4627dbc495b1c3e148e64d87dbaf33968157aca1f03a9d8"} Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.981944 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c5441e39-9e5e-442a-a498-b031f7e6f382","Type":"ContainerStarted","Data":"e150059ba31b0d1ed4e779bd3443bfe63a28e141ac7eee71960aa45cb2e198bd"} Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.983982 4932 generic.go:334] "Generic (PLEG): container finished" podID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" containerID="cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92" exitCode=0 Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 
09:21:30.984050 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f9e577fb-4510-4fd9-8560-1fa627d3f94c","Type":"ContainerDied","Data":"cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92"} Mar 21 09:21:30 crc kubenswrapper[4932]: I0321 09:21:30.986251 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4a8ebe60-2636-4f10-84b9-4f9056ee3323","Type":"ContainerStarted","Data":"f50ee6d7ace2f4a0a769983f1d16b1b125d314b6b6713ab4ff545deb83a671a5"} Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.007212 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.007187972 podStartE2EDuration="2.007187972s" podCreationTimestamp="2026-03-21 09:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:31.001649287 +0000 UTC m=+1394.596847556" watchObservedRunningTime="2026-03-21 09:21:31.007187972 +0000 UTC m=+1394.602386241" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.044538 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.064608 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfsjq\" (UniqueName: \"kubernetes.io/projected/f9e577fb-4510-4fd9-8560-1fa627d3f94c-kube-api-access-zfsjq\") pod \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.065098 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-combined-ca-bundle\") pod \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.068916 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-config-data\") pod \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\" (UID: \"f9e577fb-4510-4fd9-8560-1fa627d3f94c\") " Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.101577 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e577fb-4510-4fd9-8560-1fa627d3f94c-kube-api-access-zfsjq" (OuterVolumeSpecName: "kube-api-access-zfsjq") pod "f9e577fb-4510-4fd9-8560-1fa627d3f94c" (UID: "f9e577fb-4510-4fd9-8560-1fa627d3f94c"). InnerVolumeSpecName "kube-api-access-zfsjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.127669 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-config-data" (OuterVolumeSpecName: "config-data") pod "f9e577fb-4510-4fd9-8560-1fa627d3f94c" (UID: "f9e577fb-4510-4fd9-8560-1fa627d3f94c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.146705 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9e577fb-4510-4fd9-8560-1fa627d3f94c" (UID: "f9e577fb-4510-4fd9-8560-1fa627d3f94c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.174654 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfsjq\" (UniqueName: \"kubernetes.io/projected/f9e577fb-4510-4fd9-8560-1fa627d3f94c-kube-api-access-zfsjq\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.174687 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:31 crc kubenswrapper[4932]: I0321 09:21:31.174697 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e577fb-4510-4fd9-8560-1fa627d3f94c-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.002803 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.002799 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f9e577fb-4510-4fd9-8560-1fa627d3f94c","Type":"ContainerDied","Data":"cccca58e1b671a67a1770a461dea6757edd68e1d56f3d58f2f514ad5c21af398"} Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.003488 4932 scope.go:117] "RemoveContainer" containerID="cd016e85f6a690bd4fc3b121d848a467b61ea5ad4cea20cb47a7294111e5da92" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.007505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4a8ebe60-2636-4f10-84b9-4f9056ee3323","Type":"ContainerStarted","Data":"1d43dd9cdb8f549f6518f963910edeaca9795622cd3a84c77ca0c9e28eeca349"} Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.007583 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.039774 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.039754724 podStartE2EDuration="2.039754724s" podCreationTimestamp="2026-03-21 09:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:32.033092574 +0000 UTC m=+1395.628290843" watchObservedRunningTime="2026-03-21 09:21:32.039754724 +0000 UTC m=+1395.634952993" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.086226 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.116372 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.130195 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Mar 21 09:21:32 crc kubenswrapper[4932]: E0321 09:21:32.130857 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" containerName="nova-scheduler-scheduler" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.130883 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" containerName="nova-scheduler-scheduler" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.131173 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" containerName="nova-scheduler-scheduler" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.132105 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.135222 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.146122 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.199882 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-config-data\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.199964 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgkj\" (UniqueName: \"kubernetes.io/projected/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-kube-api-access-rfgkj\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.200434 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.302252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.302595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-config-data\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.302656 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgkj\" (UniqueName: \"kubernetes.io/projected/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-kube-api-access-rfgkj\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.310336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-config-data\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.311251 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.321667 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgkj\" (UniqueName: \"kubernetes.io/projected/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-kube-api-access-rfgkj\") pod \"nova-scheduler-0\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") " pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.453127 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 21 09:21:32 crc kubenswrapper[4932]: I0321 09:21:32.960180 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 21 09:21:33 crc kubenswrapper[4932]: I0321 09:21:33.022155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd3e3c2-b658-406c-a87b-c531aa3b5fee","Type":"ContainerStarted","Data":"3260e8f673146e9795908d33760bcb4997b4cfe335b74fe8b8ddd7b38770e205"} Mar 21 09:21:33 crc kubenswrapper[4932]: I0321 09:21:33.716269 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e577fb-4510-4fd9-8560-1fa627d3f94c" path="/var/lib/kubelet/pods/f9e577fb-4510-4fd9-8560-1fa627d3f94c/volumes" Mar 21 09:21:34 crc kubenswrapper[4932]: I0321 09:21:34.035181 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd3e3c2-b658-406c-a87b-c531aa3b5fee","Type":"ContainerStarted","Data":"8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300"} Mar 21 09:21:34 crc kubenswrapper[4932]: I0321 09:21:34.050994 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.050974968 podStartE2EDuration="2.050974968s" 
podCreationTimestamp="2026-03-21 09:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:34.050624237 +0000 UTC m=+1397.645822516" watchObservedRunningTime="2026-03-21 09:21:34.050974968 +0000 UTC m=+1397.646173237" Mar 21 09:21:35 crc kubenswrapper[4932]: I0321 09:21:35.703165 4932 scope.go:117] "RemoveContainer" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07" Mar 21 09:21:35 crc kubenswrapper[4932]: E0321 09:21:35.703776 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:21:37 crc kubenswrapper[4932]: I0321 09:21:37.454011 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 21 09:21:37 crc kubenswrapper[4932]: I0321 09:21:37.615558 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 21 09:21:37 crc kubenswrapper[4932]: I0321 09:21:37.615611 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 21 09:21:37 crc kubenswrapper[4932]: I0321 09:21:37.711795 4932 scope.go:117] "RemoveContainer" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47" Mar 21 09:21:38 crc kubenswrapper[4932]: I0321 09:21:38.077764 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78"} Mar 21 09:21:38 crc kubenswrapper[4932]: I0321 09:21:38.627549 4932 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:21:38 crc kubenswrapper[4932]: I0321 09:21:38.627957 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:21:39 crc kubenswrapper[4932]: I0321 09:21:39.827553 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:21:39 crc kubenswrapper[4932]: I0321 09:21:39.827938 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:21:40 crc kubenswrapper[4932]: I0321 09:21:40.415967 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 21 09:21:40 crc kubenswrapper[4932]: I0321 09:21:40.910549 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 09:21:40 crc kubenswrapper[4932]: I0321 09:21:40.910693 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 09:21:42 crc kubenswrapper[4932]: I0321 09:21:42.453937 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 21 09:21:42 crc kubenswrapper[4932]: I0321 09:21:42.487792 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 21 09:21:43 crc kubenswrapper[4932]: I0321 09:21:43.160673 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 21 09:21:45 crc kubenswrapper[4932]: I0321 09:21:45.616064 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 21 09:21:45 crc kubenswrapper[4932]: I0321 09:21:45.617440 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 21 09:21:46 crc kubenswrapper[4932]: I0321 09:21:46.703030 4932 scope.go:117] "RemoveContainer" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.168278 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520"} Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.171641 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" exitCode=1 Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.171687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78"} Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.171738 4932 scope.go:117] "RemoveContainer" containerID="0869e3baee25054f52dfc2dfe9be6d3e5f98d245378f65a5fa3dcc7aa1543c47" Mar 21 09:21:47 crc 
kubenswrapper[4932]: I0321 09:21:47.172199 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:21:47 crc kubenswrapper[4932]: E0321 09:21:47.172482 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.623677 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.631321 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.640038 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.740924 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.741021 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.742544 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.742617 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.827670 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 
09:21:47.827815 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.948641 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:21:47 crc kubenswrapper[4932]: I0321 09:21:47.948691 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:21:48 crc kubenswrapper[4932]: I0321 09:21:48.188728 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:21:48 crc kubenswrapper[4932]: E0321 09:21:48.189337 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:21:48 crc kubenswrapper[4932]: I0321 09:21:48.197716 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.173191 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.208999 4932 generic.go:334] "Generic (PLEG): container finished" podID="a4d25fc6-974f-4695-8d73-2783af6957f5" containerID="8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68" exitCode=137 Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.210317 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:21:49 crc kubenswrapper[4932]: E0321 09:21:49.210801 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.210859 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.211019 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4d25fc6-974f-4695-8d73-2783af6957f5","Type":"ContainerDied","Data":"8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68"} Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.211052 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4d25fc6-974f-4695-8d73-2783af6957f5","Type":"ContainerDied","Data":"ab8113c13ae044f001d2099734ece2322b791d4594d4cc52a33138c0ee34da06"} Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.211076 4932 scope.go:117] "RemoveContainer" containerID="8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.250012 4932 scope.go:117] "RemoveContainer" containerID="8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68" Mar 21 09:21:49 crc kubenswrapper[4932]: E0321 09:21:49.250402 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68\": container with ID starting with 8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68 not found: ID does not exist" containerID="8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.250462 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68"} err="failed to get container status \"8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68\": rpc error: code = NotFound desc = could not find container \"8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68\": container with ID starting with 
8a3700df0cc571f7cb428a503e0f7deec13f10402eb0a27765039e3eea2e0f68 not found: ID does not exist" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.272283 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fkz6\" (UniqueName: \"kubernetes.io/projected/a4d25fc6-974f-4695-8d73-2783af6957f5-kube-api-access-9fkz6\") pod \"a4d25fc6-974f-4695-8d73-2783af6957f5\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.272405 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-config-data\") pod \"a4d25fc6-974f-4695-8d73-2783af6957f5\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.272455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-combined-ca-bundle\") pod \"a4d25fc6-974f-4695-8d73-2783af6957f5\" (UID: \"a4d25fc6-974f-4695-8d73-2783af6957f5\") " Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.278567 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d25fc6-974f-4695-8d73-2783af6957f5-kube-api-access-9fkz6" (OuterVolumeSpecName: "kube-api-access-9fkz6") pod "a4d25fc6-974f-4695-8d73-2783af6957f5" (UID: "a4d25fc6-974f-4695-8d73-2783af6957f5"). InnerVolumeSpecName "kube-api-access-9fkz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.307490 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4d25fc6-974f-4695-8d73-2783af6957f5" (UID: "a4d25fc6-974f-4695-8d73-2783af6957f5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.308062 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-config-data" (OuterVolumeSpecName: "config-data") pod "a4d25fc6-974f-4695-8d73-2783af6957f5" (UID: "a4d25fc6-974f-4695-8d73-2783af6957f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.375022 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fkz6\" (UniqueName: \"kubernetes.io/projected/a4d25fc6-974f-4695-8d73-2783af6957f5-kube-api-access-9fkz6\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.375061 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.375074 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d25fc6-974f-4695-8d73-2783af6957f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.545431 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.557493 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.571363 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 21 09:21:49 crc kubenswrapper[4932]: E0321 09:21:49.572043 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d25fc6-974f-4695-8d73-2783af6957f5" 
containerName="nova-cell1-novncproxy-novncproxy"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.572110 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d25fc6-974f-4695-8d73-2783af6957f5" containerName="nova-cell1-novncproxy-novncproxy"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.572386 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d25fc6-974f-4695-8d73-2783af6957f5" containerName="nova-cell1-novncproxy-novncproxy"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.573229 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.578768 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.579289 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.579787 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.586674 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.681562 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/489c9eb2-53f2-4e34-828f-4e294caa705e-kube-api-access-w6td7\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.681724 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.681751 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.681775 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.681793 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.713515 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d25fc6-974f-4695-8d73-2783af6957f5" path="/var/lib/kubelet/pods/a4d25fc6-974f-4695-8d73-2783af6957f5/volumes"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.783840 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.783892 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.783919 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.783939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.784584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/489c9eb2-53f2-4e34-828f-4e294caa705e-kube-api-access-w6td7\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.787509 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.787674 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.789288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.800145 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/489c9eb2-53f2-4e34-828f-4e294caa705e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.816124 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/489c9eb2-53f2-4e34-828f-4e294caa705e-kube-api-access-w6td7\") pod \"nova-cell1-novncproxy-0\" (UID: \"489c9eb2-53f2-4e34-828f-4e294caa705e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.834670 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.835777 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.841415 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 21 09:21:49 crc kubenswrapper[4932]: I0321 09:21:49.911848 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.229549 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.427743 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-754c945467-b2nqr"]
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.434005 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.444415 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 21 09:21:50 crc kubenswrapper[4932]: W0321 09:21:50.462741 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod489c9eb2_53f2_4e34_828f_4e294caa705e.slice/crio-25d8802b609d68aa9f32a94beaabd6edec95f25e62f9a90361e69cb2390a2a9e WatchSource:0}: Error finding container 25d8802b609d68aa9f32a94beaabd6edec95f25e62f9a90361e69cb2390a2a9e: Status 404 returned error can't find the container with id 25d8802b609d68aa9f32a94beaabd6edec95f25e62f9a90361e69cb2390a2a9e
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.472806 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-754c945467-b2nqr"]
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.501613 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-config\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.502073 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-ovsdbserver-nb\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.502246 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-dns-svc\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.502383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-ovsdbserver-sb\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.502472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-dns-swift-storage-0\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.502508 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zn4\" (UniqueName: \"kubernetes.io/projected/b458937d-c892-47ea-ac33-6aa0eb244b60-kube-api-access-69zn4\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.603909 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-dns-svc\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.603999 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-ovsdbserver-sb\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.604057 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-dns-swift-storage-0\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.604077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zn4\" (UniqueName: \"kubernetes.io/projected/b458937d-c892-47ea-ac33-6aa0eb244b60-kube-api-access-69zn4\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.604135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-config\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.604161 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-ovsdbserver-nb\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.606206 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-ovsdbserver-nb\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.606508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-dns-svc\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.606531 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-dns-swift-storage-0\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.607722 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-ovsdbserver-sb\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.607795 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b458937d-c892-47ea-ac33-6aa0eb244b60-config\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.628066 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zn4\" (UniqueName: \"kubernetes.io/projected/b458937d-c892-47ea-ac33-6aa0eb244b60-kube-api-access-69zn4\") pod \"dnsmasq-dns-754c945467-b2nqr\" (UID: \"b458937d-c892-47ea-ac33-6aa0eb244b60\") " pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:50 crc kubenswrapper[4932]: I0321 09:21:50.836251 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:51 crc kubenswrapper[4932]: I0321 09:21:51.231841 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"489c9eb2-53f2-4e34-828f-4e294caa705e","Type":"ContainerStarted","Data":"1daaa649925b0c76ec6cb83dfec4fb8646364d8c5acd98ce5422108febbd10ad"}
Mar 21 09:21:51 crc kubenswrapper[4932]: I0321 09:21:51.232205 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"489c9eb2-53f2-4e34-828f-4e294caa705e","Type":"ContainerStarted","Data":"25d8802b609d68aa9f32a94beaabd6edec95f25e62f9a90361e69cb2390a2a9e"}
Mar 21 09:21:51 crc kubenswrapper[4932]: I0321 09:21:51.251662 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.251639115 podStartE2EDuration="2.251639115s" podCreationTimestamp="2026-03-21 09:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:51.251024535 +0000 UTC m=+1414.846222824" watchObservedRunningTime="2026-03-21 09:21:51.251639115 +0000 UTC m=+1414.846837384"
Mar 21 09:21:51 crc kubenswrapper[4932]: W0321 09:21:51.360561 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb458937d_c892_47ea_ac33_6aa0eb244b60.slice/crio-72f372fa754b566bb7092506ee10056207b3c1bd8331903f288735a5355c7b93 WatchSource:0}: Error finding container 72f372fa754b566bb7092506ee10056207b3c1bd8331903f288735a5355c7b93: Status 404 returned error can't find the container with id 72f372fa754b566bb7092506ee10056207b3c1bd8331903f288735a5355c7b93
Mar 21 09:21:51 crc kubenswrapper[4932]: I0321 09:21:51.365873 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-754c945467-b2nqr"]
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.242534 4932 generic.go:334] "Generic (PLEG): container finished" podID="b458937d-c892-47ea-ac33-6aa0eb244b60" containerID="0b30afae4e0cd47050bf282e969c3ea11949f2b54f170bcb63ac1ef31884149c" exitCode=0
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.242630 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754c945467-b2nqr" event={"ID":"b458937d-c892-47ea-ac33-6aa0eb244b60","Type":"ContainerDied","Data":"0b30afae4e0cd47050bf282e969c3ea11949f2b54f170bcb63ac1ef31884149c"}
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.242880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754c945467-b2nqr" event={"ID":"b458937d-c892-47ea-ac33-6aa0eb244b60","Type":"ContainerStarted","Data":"72f372fa754b566bb7092506ee10056207b3c1bd8331903f288735a5355c7b93"}
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.984838 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.985486 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-central-agent" containerID="cri-o://9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852" gracePeriod=30
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.985580 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="sg-core" containerID="cri-o://fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51" gracePeriod=30
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.985669 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="proxy-httpd" containerID="cri-o://ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661" gracePeriod=30
Mar 21 09:21:52 crc kubenswrapper[4932]: I0321 09:21:52.985652 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-notification-agent" containerID="cri-o://258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077" gracePeriod=30
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.009852 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.254707 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-754c945467-b2nqr" event={"ID":"b458937d-c892-47ea-ac33-6aa0eb244b60","Type":"ContainerStarted","Data":"784feed5b826339a4b5dd83e20d2de0e32fce8027ad132ae461525b96d34c3e7"}
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.254848 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-754c945467-b2nqr"
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.259879 4932 generic.go:334] "Generic (PLEG): container finished" podID="2962ecba-b083-4132-bd2c-eac94506c576" containerID="ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661" exitCode=0
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.259909 4932 generic.go:334] "Generic (PLEG): container finished" podID="2962ecba-b083-4132-bd2c-eac94506c576" containerID="fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51" exitCode=2
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.259929 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerDied","Data":"ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661"}
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.259948 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerDied","Data":"fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51"}
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.280307 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-754c945467-b2nqr" podStartSLOduration=3.28028064 podStartE2EDuration="3.28028064s" podCreationTimestamp="2026-03-21 09:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:53.274125306 +0000 UTC m=+1416.869323575" watchObservedRunningTime="2026-03-21 09:21:53.28028064 +0000 UTC m=+1416.875478909"
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.452532 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.454773 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-log" containerID="cri-o://595e1b6f8ef08ae4d4627dbc495b1c3e148e64d87dbaf33968157aca1f03a9d8" gracePeriod=30
Mar 21 09:21:53 crc kubenswrapper[4932]: I0321 09:21:53.454953 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-api" containerID="cri-o://e221a0366a8e894e1ba2ae131f0a3db151cf9b247f6a4f2d85a0a3744d6cfadb" gracePeriod=30
Mar 21 09:21:54 crc kubenswrapper[4932]: I0321 09:21:54.274982 4932 generic.go:334] "Generic (PLEG): container finished" podID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerID="595e1b6f8ef08ae4d4627dbc495b1c3e148e64d87dbaf33968157aca1f03a9d8" exitCode=143
Mar 21 09:21:54 crc kubenswrapper[4932]: I0321 09:21:54.275047 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c5441e39-9e5e-442a-a498-b031f7e6f382","Type":"ContainerDied","Data":"595e1b6f8ef08ae4d4627dbc495b1c3e148e64d87dbaf33968157aca1f03a9d8"}
Mar 21 09:21:54 crc kubenswrapper[4932]: I0321 09:21:54.279100 4932 generic.go:334] "Generic (PLEG): container finished" podID="2962ecba-b083-4132-bd2c-eac94506c576" containerID="9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852" exitCode=0
Mar 21 09:21:54 crc kubenswrapper[4932]: I0321 09:21:54.279144 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerDied","Data":"9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852"}
Mar 21 09:21:54 crc kubenswrapper[4932]: I0321 09:21:54.912120 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.326167 4932 generic.go:334] "Generic (PLEG): container finished" podID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerID="e221a0366a8e894e1ba2ae131f0a3db151cf9b247f6a4f2d85a0a3744d6cfadb" exitCode=0
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.326262 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c5441e39-9e5e-442a-a498-b031f7e6f382","Type":"ContainerDied","Data":"e221a0366a8e894e1ba2ae131f0a3db151cf9b247f6a4f2d85a0a3744d6cfadb"}
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.328037 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.351940 4932 generic.go:334] "Generic (PLEG): container finished" podID="2962ecba-b083-4132-bd2c-eac94506c576" containerID="258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077" exitCode=0
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.352218 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerDied","Data":"258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077"}
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.352299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2962ecba-b083-4132-bd2c-eac94506c576","Type":"ContainerDied","Data":"dfe3f8bbb4d0fb2f6b84b51fc7d98c7c469a2a1c3d53e2bdc1608638083dcf68"}
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.352326 4932 scope.go:117] "RemoveContainer" containerID="ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.405858 4932 scope.go:117] "RemoveContainer" containerID="fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.423835 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-sg-core-conf-yaml\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.424210 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-scripts\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.424249 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-combined-ca-bundle\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.424286 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-log-httpd\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.424319 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsczm\" (UniqueName: \"kubernetes.io/projected/2962ecba-b083-4132-bd2c-eac94506c576-kube-api-access-tsczm\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.424392 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-config-data\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.424469 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-run-httpd\") pod \"2962ecba-b083-4132-bd2c-eac94506c576\" (UID: \"2962ecba-b083-4132-bd2c-eac94506c576\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.425302 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.425815 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.449917 4932 scope.go:117] "RemoveContainer" containerID="258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.451983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-scripts" (OuterVolumeSpecName: "scripts") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.452107 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2962ecba-b083-4132-bd2c-eac94506c576-kube-api-access-tsczm" (OuterVolumeSpecName: "kube-api-access-tsczm") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "kube-api-access-tsczm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.502216 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.516220 4932 scope.go:117] "RemoveContainer" containerID="9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.529998 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.530031 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.530042 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsczm\" (UniqueName: \"kubernetes.io/projected/2962ecba-b083-4132-bd2c-eac94506c576-kube-api-access-tsczm\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.530052 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2962ecba-b083-4132-bd2c-eac94506c576-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.530061 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.544202 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.545802 4932 scope.go:117] "RemoveContainer" containerID="ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661"
Mar 21 09:21:55 crc kubenswrapper[4932]: E0321 09:21:55.546180 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661\": container with ID starting with ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661 not found: ID does not exist" containerID="ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.546236 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661"} err="failed to get container status \"ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661\": rpc error: code = NotFound desc = could not find container \"ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661\": container with ID starting with ee707408666fd345ab91e4b891032c46be1057c6e3ff5e0976fce199c5308661 not found: ID does not exist"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.546266 4932 scope.go:117] "RemoveContainer" containerID="fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51"
Mar 21 09:21:55 crc kubenswrapper[4932]: E0321 09:21:55.546654 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51\": container with ID starting with fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51 not found: ID does not exist" containerID="fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.546698 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51"} err="failed to get container status \"fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51\": rpc error: code = NotFound desc = could not find container \"fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51\": container with ID starting with fee3ea4efdfc6dd101a13e51e76e0ca465bf2f61cecd3173e4dafe4687582b51 not found: ID does not exist"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.546730 4932 scope.go:117] "RemoveContainer" containerID="258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077"
Mar 21 09:21:55 crc kubenswrapper[4932]: E0321 09:21:55.547066 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077\": container with ID starting with 258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077 not found: ID does not exist" containerID="258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.547094 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077"} err="failed to get container status \"258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077\": rpc error: code = NotFound desc = could not find container \"258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077\": container with ID starting with 258ae8e729787fa52bfc7525243f328b634351d86181a425e70561c4cbbc4077 not found: ID does not exist"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.547111 4932 scope.go:117] "RemoveContainer" containerID="9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852"
Mar 21 09:21:55 crc kubenswrapper[4932]: E0321 09:21:55.547435 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852\": container with ID starting with 9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852 not found: ID does not exist" containerID="9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.547487 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852"} err="failed to get container status \"9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852\": rpc error: code = NotFound desc = could not find container \"9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852\": container with ID starting with 9907f4ecf7eb6e1015acc40e3c1e5a506524ec690de4e12837ef2a3259c80852 not found: ID does not exist"
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.575033 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.627758 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-config-data" (OuterVolumeSpecName: "config-data") pod "2962ecba-b083-4132-bd2c-eac94506c576" (UID: "2962ecba-b083-4132-bd2c-eac94506c576"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.631454 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dnzq\" (UniqueName: \"kubernetes.io/projected/c5441e39-9e5e-442a-a498-b031f7e6f382-kube-api-access-4dnzq\") pod \"c5441e39-9e5e-442a-a498-b031f7e6f382\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.631839 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5441e39-9e5e-442a-a498-b031f7e6f382-logs\") pod \"c5441e39-9e5e-442a-a498-b031f7e6f382\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.631919 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-config-data\") pod \"c5441e39-9e5e-442a-a498-b031f7e6f382\" (UID: \"c5441e39-9e5e-442a-a498-b031f7e6f382\") "
Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.632110 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-combined-ca-bundle\") pod \"c5441e39-9e5e-442a-a498-b031f7e6f382\" (UID:
\"c5441e39-9e5e-442a-a498-b031f7e6f382\") " Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.633126 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.633152 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2962ecba-b083-4132-bd2c-eac94506c576-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.633676 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5441e39-9e5e-442a-a498-b031f7e6f382-logs" (OuterVolumeSpecName: "logs") pod "c5441e39-9e5e-442a-a498-b031f7e6f382" (UID: "c5441e39-9e5e-442a-a498-b031f7e6f382"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.638225 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5441e39-9e5e-442a-a498-b031f7e6f382-kube-api-access-4dnzq" (OuterVolumeSpecName: "kube-api-access-4dnzq") pod "c5441e39-9e5e-442a-a498-b031f7e6f382" (UID: "c5441e39-9e5e-442a-a498-b031f7e6f382"). InnerVolumeSpecName "kube-api-access-4dnzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.666042 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-config-data" (OuterVolumeSpecName: "config-data") pod "c5441e39-9e5e-442a-a498-b031f7e6f382" (UID: "c5441e39-9e5e-442a-a498-b031f7e6f382"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.670804 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5441e39-9e5e-442a-a498-b031f7e6f382" (UID: "c5441e39-9e5e-442a-a498-b031f7e6f382"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.737022 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dnzq\" (UniqueName: \"kubernetes.io/projected/c5441e39-9e5e-442a-a498-b031f7e6f382-kube-api-access-4dnzq\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.737067 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5441e39-9e5e-442a-a498-b031f7e6f382-logs\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.737080 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:55 crc kubenswrapper[4932]: I0321 09:21:55.737092 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5441e39-9e5e-442a-a498-b031f7e6f382-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.363381 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c5441e39-9e5e-442a-a498-b031f7e6f382","Type":"ContainerDied","Data":"e150059ba31b0d1ed4e779bd3443bfe63a28e141ac7eee71960aa45cb2e198bd"} Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.363450 4932 scope.go:117] "RemoveContainer" 
containerID="e221a0366a8e894e1ba2ae131f0a3db151cf9b247f6a4f2d85a0a3744d6cfadb" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.363341 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.366933 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.388226 4932 scope.go:117] "RemoveContainer" containerID="595e1b6f8ef08ae4d4627dbc495b1c3e148e64d87dbaf33968157aca1f03a9d8" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.397913 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.415398 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.442707 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.458931 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.467564 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: E0321 09:21:56.468202 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-central-agent" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468219 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-central-agent" Mar 21 09:21:56 crc kubenswrapper[4932]: E0321 09:21:56.468240 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-notification-agent" Mar 21 09:21:56 crc 
kubenswrapper[4932]: I0321 09:21:56.468247 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-notification-agent" Mar 21 09:21:56 crc kubenswrapper[4932]: E0321 09:21:56.468257 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="proxy-httpd" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468268 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="proxy-httpd" Mar 21 09:21:56 crc kubenswrapper[4932]: E0321 09:21:56.468287 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-api" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468295 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-api" Mar 21 09:21:56 crc kubenswrapper[4932]: E0321 09:21:56.468317 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-log" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468324 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-log" Mar 21 09:21:56 crc kubenswrapper[4932]: E0321 09:21:56.468337 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="sg-core" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468345 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="sg-core" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468624 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="sg-core" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468641 
4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-api" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468653 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-central-agent" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468668 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" containerName="nova-api-log" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468680 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="ceilometer-notification-agent" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.468699 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2962ecba-b083-4132-bd2c-eac94506c576" containerName="proxy-httpd" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.470068 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.472523 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.472940 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.473492 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.478616 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.481262 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.483988 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.484778 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.491598 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.502244 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554013 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554066 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkj8r\" (UniqueName: \"kubernetes.io/projected/8b5a50d4-6331-449e-b615-fb8645e6974c-kube-api-access-wkj8r\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-config-data\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554116 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-scripts\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554159 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554239 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-run-httpd\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-log-httpd\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554283 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-config-data\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " 
pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554304 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5a50d4-6331-449e-b615-fb8645e6974c-logs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554321 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5t5\" (UniqueName: \"kubernetes.io/projected/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-kube-api-access-6p5t5\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554403 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.554431 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656781 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656831 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-run-httpd\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656867 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-log-httpd\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656909 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-config-data\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5a50d4-6331-449e-b615-fb8645e6974c-logs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.656962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p5t5\" (UniqueName: \"kubernetes.io/projected/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-kube-api-access-6p5t5\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " 
pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657007 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657118 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657162 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkj8r\" (UniqueName: \"kubernetes.io/projected/8b5a50d4-6331-449e-b615-fb8645e6974c-kube-api-access-wkj8r\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657191 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-config-data\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657209 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-scripts\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5a50d4-6331-449e-b615-fb8645e6974c-logs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.657897 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-log-httpd\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.658003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-run-httpd\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.661225 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.662183 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.663000 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.663376 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-scripts\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.665063 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.668187 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-config-data\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.674134 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.678906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-config-data\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 
09:21:56.680987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkj8r\" (UniqueName: \"kubernetes.io/projected/8b5a50d4-6331-449e-b615-fb8645e6974c-kube-api-access-wkj8r\") pod \"nova-api-0\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") " pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.681138 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p5t5\" (UniqueName: \"kubernetes.io/projected/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-kube-api-access-6p5t5\") pod \"ceilometer-0\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " pod="openstack/ceilometer-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.799762 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:21:56 crc kubenswrapper[4932]: I0321 09:21:56.820938 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.311727 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.383225 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5a50d4-6331-449e-b615-fb8645e6974c","Type":"ContainerStarted","Data":"fa2c54fccc53d5595c4f6eb7b2c9e727230d4d817b9e2d9e2c8fe850c38b6e7c"} Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.396767 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" exitCode=1 Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.396822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" 
event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520"} Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.396868 4932 scope.go:117] "RemoveContainer" containerID="9ac45c3831f11a0fed5871c6734d6b0552462f0a35be2204a7039bd348030b07" Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.398779 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:21:57 crc kubenswrapper[4932]: E0321 09:21:57.399632 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.410419 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:21:57 crc kubenswrapper[4932]: W0321 09:21:57.448001 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71b66984_4a99_4eb0_bf02_a9eb0c99f6b6.slice/crio-eab57ed00d9c1cec70151569f5572b52012bd26a921dbedfe131f8f027d515a5 WatchSource:0}: Error finding container eab57ed00d9c1cec70151569f5572b52012bd26a921dbedfe131f8f027d515a5: Status 404 returned error can't find the container with id eab57ed00d9c1cec70151569f5572b52012bd26a921dbedfe131f8f027d515a5 Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.722127 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2962ecba-b083-4132-bd2c-eac94506c576" path="/var/lib/kubelet/pods/2962ecba-b083-4132-bd2c-eac94506c576/volumes" Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.724098 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c5441e39-9e5e-442a-a498-b031f7e6f382" path="/var/lib/kubelet/pods/c5441e39-9e5e-442a-a498-b031f7e6f382/volumes" Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.949769 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:21:57 crc kubenswrapper[4932]: I0321 09:21:57.949863 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:21:58 crc kubenswrapper[4932]: I0321 09:21:58.409988 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:21:58 crc kubenswrapper[4932]: E0321 09:21:58.410471 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:21:58 crc kubenswrapper[4932]: I0321 09:21:58.410810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5a50d4-6331-449e-b615-fb8645e6974c","Type":"ContainerStarted","Data":"433142433d27ceb5886b3bede419eef5ffc5007f08eee4bb660047ee6f79af49"} Mar 21 09:21:58 crc kubenswrapper[4932]: I0321 09:21:58.410837 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5a50d4-6331-449e-b615-fb8645e6974c","Type":"ContainerStarted","Data":"ae0bc1b07725eb5a0ec91acf611815cfa2dd73b7a8ca4de3940e1a7f8d7e4c3a"} Mar 21 09:21:58 crc kubenswrapper[4932]: I0321 09:21:58.412991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerStarted","Data":"e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651"} Mar 21 09:21:58 crc 
kubenswrapper[4932]: I0321 09:21:58.413017 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerStarted","Data":"eab57ed00d9c1cec70151569f5572b52012bd26a921dbedfe131f8f027d515a5"} Mar 21 09:21:58 crc kubenswrapper[4932]: I0321 09:21:58.462382 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.462344326 podStartE2EDuration="2.462344326s" podCreationTimestamp="2026-03-21 09:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:21:58.445880957 +0000 UTC m=+1422.041079236" watchObservedRunningTime="2026-03-21 09:21:58.462344326 +0000 UTC m=+1422.057542595" Mar 21 09:21:59 crc kubenswrapper[4932]: I0321 09:21:59.423637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerStarted","Data":"c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511"} Mar 21 09:21:59 crc kubenswrapper[4932]: I0321 09:21:59.423952 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerStarted","Data":"3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361"} Mar 21 09:21:59 crc kubenswrapper[4932]: I0321 09:21:59.912478 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:21:59 crc kubenswrapper[4932]: I0321 09:21:59.939439 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.136267 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568082-h84rh"] Mar 21 09:22:00 crc kubenswrapper[4932]: 
I0321 09:22:00.137745 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.140872 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.141085 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.143537 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.151885 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568082-h84rh"] Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.226967 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.227294 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.236305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2cbx\" (UniqueName: \"kubernetes.io/projected/8a82b375-1f38-45f9-baed-1410727d4b6f-kube-api-access-n2cbx\") pod \"auto-csr-approver-29568082-h84rh\" (UID: \"8a82b375-1f38-45f9-baed-1410727d4b6f\") " 
pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.338913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2cbx\" (UniqueName: \"kubernetes.io/projected/8a82b375-1f38-45f9-baed-1410727d4b6f-kube-api-access-n2cbx\") pod \"auto-csr-approver-29568082-h84rh\" (UID: \"8a82b375-1f38-45f9-baed-1410727d4b6f\") " pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.359067 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2cbx\" (UniqueName: \"kubernetes.io/projected/8a82b375-1f38-45f9-baed-1410727d4b6f-kube-api-access-n2cbx\") pod \"auto-csr-approver-29568082-h84rh\" (UID: \"8a82b375-1f38-45f9-baed-1410727d4b6f\") " pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.458933 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.497179 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.707309 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hcvhr"] Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.709030 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.715701 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.716390 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.733851 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hcvhr"] Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.839593 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-754c945467-b2nqr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.856382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-config-data\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.856454 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.856573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdgb\" (UniqueName: \"kubernetes.io/projected/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-kube-api-access-pbdgb\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " 
pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.856694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-scripts\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.958452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdgb\" (UniqueName: \"kubernetes.io/projected/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-kube-api-access-pbdgb\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.958560 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-scripts\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.958628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-config-data\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.958675 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 
21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.963700 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56b6d686c9-jxr9g"] Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.963995 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerName="dnsmasq-dns" containerID="cri-o://ef670a2965d6f38e980f4164920fd54dcf8c01355790c623143d781d5154f29c" gracePeriod=10 Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.968659 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.970257 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-config-data\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.974830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-scripts\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:00 crc kubenswrapper[4932]: I0321 09:22:00.989152 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdgb\" (UniqueName: \"kubernetes.io/projected/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-kube-api-access-pbdgb\") pod \"nova-cell1-cell-mapping-hcvhr\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") " 
pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.110974 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hcvhr" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.182445 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568082-h84rh"] Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.484704 4932 generic.go:334] "Generic (PLEG): container finished" podID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerID="ef670a2965d6f38e980f4164920fd54dcf8c01355790c623143d781d5154f29c" exitCode=0 Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.484782 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" event={"ID":"16a5f9de-7325-4400-a5c6-17b478be66a1","Type":"ContainerDied","Data":"ef670a2965d6f38e980f4164920fd54dcf8c01355790c623143d781d5154f29c"} Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.495262 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerStarted","Data":"8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac"} Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.495456 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.498281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568082-h84rh" event={"ID":"8a82b375-1f38-45f9-baed-1410727d4b6f","Type":"ContainerStarted","Data":"3ae6b1d84fd165229e3a8299fd9920b62752f11b9d8d87727cf1a46b14c86c23"} Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.549042 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.375092466 podStartE2EDuration="5.549015794s" 
podCreationTimestamp="2026-03-21 09:21:56 +0000 UTC" firstStartedPulling="2026-03-21 09:21:57.451660914 +0000 UTC m=+1421.046859183" lastFinishedPulling="2026-03-21 09:22:00.625584242 +0000 UTC m=+1424.220782511" observedRunningTime="2026-03-21 09:22:01.537526922 +0000 UTC m=+1425.132725201" watchObservedRunningTime="2026-03-21 09:22:01.549015794 +0000 UTC m=+1425.144214063" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.743431 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.888501 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-svc\") pod \"16a5f9de-7325-4400-a5c6-17b478be66a1\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.888578 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-config\") pod \"16a5f9de-7325-4400-a5c6-17b478be66a1\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.888621 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq4q4\" (UniqueName: \"kubernetes.io/projected/16a5f9de-7325-4400-a5c6-17b478be66a1-kube-api-access-bq4q4\") pod \"16a5f9de-7325-4400-a5c6-17b478be66a1\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.888726 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-sb\") pod \"16a5f9de-7325-4400-a5c6-17b478be66a1\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " Mar 21 09:22:01 crc 
kubenswrapper[4932]: I0321 09:22:01.888771 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-nb\") pod \"16a5f9de-7325-4400-a5c6-17b478be66a1\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.888917 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-swift-storage-0\") pod \"16a5f9de-7325-4400-a5c6-17b478be66a1\" (UID: \"16a5f9de-7325-4400-a5c6-17b478be66a1\") " Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.895799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a5f9de-7325-4400-a5c6-17b478be66a1-kube-api-access-bq4q4" (OuterVolumeSpecName: "kube-api-access-bq4q4") pod "16a5f9de-7325-4400-a5c6-17b478be66a1" (UID: "16a5f9de-7325-4400-a5c6-17b478be66a1"). InnerVolumeSpecName "kube-api-access-bq4q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.972225 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16a5f9de-7325-4400-a5c6-17b478be66a1" (UID: "16a5f9de-7325-4400-a5c6-17b478be66a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.993955 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16a5f9de-7325-4400-a5c6-17b478be66a1" (UID: "16a5f9de-7325-4400-a5c6-17b478be66a1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.995580 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.995609 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:01 crc kubenswrapper[4932]: I0321 09:22:01.995623 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq4q4\" (UniqueName: \"kubernetes.io/projected/16a5f9de-7325-4400-a5c6-17b478be66a1-kube-api-access-bq4q4\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.044331 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16a5f9de-7325-4400-a5c6-17b478be66a1" (UID: "16a5f9de-7325-4400-a5c6-17b478be66a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.050712 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hcvhr"] Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.057074 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16a5f9de-7325-4400-a5c6-17b478be66a1" (UID: "16a5f9de-7325-4400-a5c6-17b478be66a1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.070859 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-config" (OuterVolumeSpecName: "config") pod "16a5f9de-7325-4400-a5c6-17b478be66a1" (UID: "16a5f9de-7325-4400-a5c6-17b478be66a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.099066 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-config\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.099173 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.099191 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16a5f9de-7325-4400-a5c6-17b478be66a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.510971 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hcvhr" event={"ID":"3e696cfc-972e-46a0-bf24-c88ee59fb7e1","Type":"ContainerStarted","Data":"48399e95423e690e191a382c6667f1323d264dbc97c9d649df199e1151a23ca4"} Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.511293 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hcvhr" event={"ID":"3e696cfc-972e-46a0-bf24-c88ee59fb7e1","Type":"ContainerStarted","Data":"9fa48092174427b0c89b8f4013ae1201674bcc57925f88cfe25f3a5da1ca2717"} Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.518181 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" event={"ID":"16a5f9de-7325-4400-a5c6-17b478be66a1","Type":"ContainerDied","Data":"6d35a3fd5d24b95b8e1c0918a7cfd8301929dd6e45eea3b6ea8422ef1c8afc4a"} Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.518203 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56b6d686c9-jxr9g" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.518242 4932 scope.go:117] "RemoveContainer" containerID="ef670a2965d6f38e980f4164920fd54dcf8c01355790c623143d781d5154f29c" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.532898 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hcvhr" podStartSLOduration=2.532880371 podStartE2EDuration="2.532880371s" podCreationTimestamp="2026-03-21 09:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:22:02.532125077 +0000 UTC m=+1426.127323356" watchObservedRunningTime="2026-03-21 09:22:02.532880371 +0000 UTC m=+1426.128078640" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.560540 4932 scope.go:117] "RemoveContainer" containerID="3b538998e426263559dac55cbb0e4cb760981ffb2b7b2c36ebc37d138b6f8535" Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.574828 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568082-h84rh" podStartSLOduration=1.716356578 podStartE2EDuration="2.574801682s" podCreationTimestamp="2026-03-21 09:22:00 +0000 UTC" firstStartedPulling="2026-03-21 09:22:01.20492742 +0000 UTC m=+1424.800125689" lastFinishedPulling="2026-03-21 09:22:02.063372524 +0000 UTC m=+1425.658570793" observedRunningTime="2026-03-21 09:22:02.56078418 +0000 UTC m=+1426.155982449" watchObservedRunningTime="2026-03-21 09:22:02.574801682 +0000 UTC m=+1426.169999961" Mar 21 09:22:02 crc 
kubenswrapper[4932]: I0321 09:22:02.586425 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56b6d686c9-jxr9g"] Mar 21 09:22:02 crc kubenswrapper[4932]: I0321 09:22:02.595130 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56b6d686c9-jxr9g"] Mar 21 09:22:03 crc kubenswrapper[4932]: I0321 09:22:03.530504 4932 generic.go:334] "Generic (PLEG): container finished" podID="8a82b375-1f38-45f9-baed-1410727d4b6f" containerID="e976f03f46fbe39c7266143a7b19dd2ba5d443d838f47bf9fb82a3869f3fd133" exitCode=0 Mar 21 09:22:03 crc kubenswrapper[4932]: I0321 09:22:03.530568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568082-h84rh" event={"ID":"8a82b375-1f38-45f9-baed-1410727d4b6f","Type":"ContainerDied","Data":"e976f03f46fbe39c7266143a7b19dd2ba5d443d838f47bf9fb82a3869f3fd133"} Mar 21 09:22:03 crc kubenswrapper[4932]: I0321 09:22:03.703939 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:22:03 crc kubenswrapper[4932]: E0321 09:22:03.704173 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:22:03 crc kubenswrapper[4932]: I0321 09:22:03.713329 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" path="/var/lib/kubelet/pods/16a5f9de-7325-4400-a5c6-17b478be66a1/volumes" Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.084992 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.173489 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2cbx\" (UniqueName: \"kubernetes.io/projected/8a82b375-1f38-45f9-baed-1410727d4b6f-kube-api-access-n2cbx\") pod \"8a82b375-1f38-45f9-baed-1410727d4b6f\" (UID: \"8a82b375-1f38-45f9-baed-1410727d4b6f\") " Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.191008 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a82b375-1f38-45f9-baed-1410727d4b6f-kube-api-access-n2cbx" (OuterVolumeSpecName: "kube-api-access-n2cbx") pod "8a82b375-1f38-45f9-baed-1410727d4b6f" (UID: "8a82b375-1f38-45f9-baed-1410727d4b6f"). InnerVolumeSpecName "kube-api-access-n2cbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.275572 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2cbx\" (UniqueName: \"kubernetes.io/projected/8a82b375-1f38-45f9-baed-1410727d4b6f-kube-api-access-n2cbx\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.554229 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568082-h84rh" event={"ID":"8a82b375-1f38-45f9-baed-1410727d4b6f","Type":"ContainerDied","Data":"3ae6b1d84fd165229e3a8299fd9920b62752f11b9d8d87727cf1a46b14c86c23"} Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.554274 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae6b1d84fd165229e3a8299fd9920b62752f11b9d8d87727cf1a46b14c86c23" Mar 21 09:22:05 crc kubenswrapper[4932]: I0321 09:22:05.554276 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568082-h84rh" Mar 21 09:22:05 crc kubenswrapper[4932]: E0321 09:22:05.731885 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a82b375_1f38_45f9_baed_1410727d4b6f.slice\": RecentStats: unable to find data in memory cache]" Mar 21 09:22:06 crc kubenswrapper[4932]: I0321 09:22:06.165218 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568076-xwz4c"] Mar 21 09:22:06 crc kubenswrapper[4932]: I0321 09:22:06.184697 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568076-xwz4c"] Mar 21 09:22:06 crc kubenswrapper[4932]: I0321 09:22:06.800278 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:22:06 crc kubenswrapper[4932]: I0321 09:22:06.800333 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:22:07 crc kubenswrapper[4932]: I0321 09:22:07.715445 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd38650a-3a05-4fb1-bb79-3641a7f91024" path="/var/lib/kubelet/pods/dd38650a-3a05-4fb1-bb79-3641a7f91024/volumes" Mar 21 09:22:07 crc kubenswrapper[4932]: I0321 09:22:07.815654 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:22:07 crc kubenswrapper[4932]: I0321 09:22:07.815654 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 21 09:22:08 crc kubenswrapper[4932]: I0321 09:22:08.590160 4932 generic.go:334] "Generic (PLEG): container finished" podID="3e696cfc-972e-46a0-bf24-c88ee59fb7e1" containerID="48399e95423e690e191a382c6667f1323d264dbc97c9d649df199e1151a23ca4" exitCode=0
Mar 21 09:22:08 crc kubenswrapper[4932]: I0321 09:22:08.590206 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hcvhr" event={"ID":"3e696cfc-972e-46a0-bf24-c88ee59fb7e1","Type":"ContainerDied","Data":"48399e95423e690e191a382c6667f1323d264dbc97c9d649df199e1151a23ca4"}
Mar 21 09:22:09 crc kubenswrapper[4932]: I0321 09:22:09.995542 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hcvhr"
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.077431 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-combined-ca-bundle\") pod \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") "
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.077550 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-scripts\") pod \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") "
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.077658 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-config-data\") pod \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") "
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.077778 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbdgb\" (UniqueName: \"kubernetes.io/projected/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-kube-api-access-pbdgb\") pod \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\" (UID: \"3e696cfc-972e-46a0-bf24-c88ee59fb7e1\") "
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.085255 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-scripts" (OuterVolumeSpecName: "scripts") pod "3e696cfc-972e-46a0-bf24-c88ee59fb7e1" (UID: "3e696cfc-972e-46a0-bf24-c88ee59fb7e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.085556 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-kube-api-access-pbdgb" (OuterVolumeSpecName: "kube-api-access-pbdgb") pod "3e696cfc-972e-46a0-bf24-c88ee59fb7e1" (UID: "3e696cfc-972e-46a0-bf24-c88ee59fb7e1"). InnerVolumeSpecName "kube-api-access-pbdgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.109864 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-config-data" (OuterVolumeSpecName: "config-data") pod "3e696cfc-972e-46a0-bf24-c88ee59fb7e1" (UID: "3e696cfc-972e-46a0-bf24-c88ee59fb7e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.111814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e696cfc-972e-46a0-bf24-c88ee59fb7e1" (UID: "3e696cfc-972e-46a0-bf24-c88ee59fb7e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.181672 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbdgb\" (UniqueName: \"kubernetes.io/projected/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-kube-api-access-pbdgb\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.181716 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.181730 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-scripts\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.181742 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e696cfc-972e-46a0-bf24-c88ee59fb7e1-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.609628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hcvhr" event={"ID":"3e696cfc-972e-46a0-bf24-c88ee59fb7e1","Type":"ContainerDied","Data":"9fa48092174427b0c89b8f4013ae1201674bcc57925f88cfe25f3a5da1ca2717"}
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.609669 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fa48092174427b0c89b8f4013ae1201674bcc57925f88cfe25f3a5da1ca2717"
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.609748 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hcvhr"
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.786218 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.786562 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-log" containerID="cri-o://ae0bc1b07725eb5a0ec91acf611815cfa2dd73b7a8ca4de3940e1a7f8d7e4c3a" gracePeriod=30
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.786637 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-api" containerID="cri-o://433142433d27ceb5886b3bede419eef5ffc5007f08eee4bb660047ee6f79af49" gracePeriod=30
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.800851 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.801087 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" containerName="nova-scheduler-scheduler" containerID="cri-o://8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300" gracePeriod=30
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.874196 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.874516 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-log" containerID="cri-o://499dd71e04450c3c555959978de3d7f18fd0435b5c5cc2e2e80e0576cd1678cb" gracePeriod=30
Mar 21 09:22:10 crc kubenswrapper[4932]: I0321 09:22:10.874739 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-metadata" containerID="cri-o://a3daa065161374673b53c615c8a08e9ba54c5624298ea3c3a743985e48d1ff11" gracePeriod=30
Mar 21 09:22:11 crc kubenswrapper[4932]: I0321 09:22:11.621894 4932 generic.go:334] "Generic (PLEG): container finished" podID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerID="ae0bc1b07725eb5a0ec91acf611815cfa2dd73b7a8ca4de3940e1a7f8d7e4c3a" exitCode=143
Mar 21 09:22:11 crc kubenswrapper[4932]: I0321 09:22:11.621981 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5a50d4-6331-449e-b615-fb8645e6974c","Type":"ContainerDied","Data":"ae0bc1b07725eb5a0ec91acf611815cfa2dd73b7a8ca4de3940e1a7f8d7e4c3a"}
Mar 21 09:22:11 crc kubenswrapper[4932]: I0321 09:22:11.624255 4932 generic.go:334] "Generic (PLEG): container finished" podID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerID="499dd71e04450c3c555959978de3d7f18fd0435b5c5cc2e2e80e0576cd1678cb" exitCode=143
Mar 21 09:22:11 crc kubenswrapper[4932]: I0321 09:22:11.624299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd","Type":"ContainerDied","Data":"499dd71e04450c3c555959978de3d7f18fd0435b5c5cc2e2e80e0576cd1678cb"}
Mar 21 09:22:12 crc kubenswrapper[4932]: E0321 09:22:12.454121 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300 is running failed: container process not found" containerID="8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 21 09:22:12 crc kubenswrapper[4932]: E0321 09:22:12.454677 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300 is running failed: container process not found" containerID="8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 21 09:22:12 crc kubenswrapper[4932]: E0321 09:22:12.455006 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300 is running failed: container process not found" containerID="8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 21 09:22:12 crc kubenswrapper[4932]: E0321 09:22:12.455114 4932 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" containerName="nova-scheduler-scheduler"
Mar 21 09:22:12 crc kubenswrapper[4932]: I0321 09:22:12.638661 4932 generic.go:334] "Generic (PLEG): container finished" podID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerID="433142433d27ceb5886b3bede419eef5ffc5007f08eee4bb660047ee6f79af49" exitCode=0
Mar 21 09:22:12 crc kubenswrapper[4932]: I0321 09:22:12.638711 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5a50d4-6331-449e-b615-fb8645e6974c","Type":"ContainerDied","Data":"433142433d27ceb5886b3bede419eef5ffc5007f08eee4bb660047ee6f79af49"}
Mar 21 09:22:12 crc kubenswrapper[4932]: I0321 09:22:12.642015 4932 generic.go:334] "Generic (PLEG): container finished" podID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerID="a3daa065161374673b53c615c8a08e9ba54c5624298ea3c3a743985e48d1ff11" exitCode=0
Mar 21 09:22:12 crc kubenswrapper[4932]: I0321 09:22:12.642091 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd","Type":"ContainerDied","Data":"a3daa065161374673b53c615c8a08e9ba54c5624298ea3c3a743985e48d1ff11"}
Mar 21 09:22:12 crc kubenswrapper[4932]: I0321 09:22:12.643498 4932 generic.go:334] "Generic (PLEG): container finished" podID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" containerID="8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300" exitCode=0
Mar 21 09:22:12 crc kubenswrapper[4932]: I0321 09:22:12.643547 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd3e3c2-b658-406c-a87b-c531aa3b5fee","Type":"ContainerDied","Data":"8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300"}
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.031314 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.143165 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-combined-ca-bundle\") pod \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.143375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-logs\") pod \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.143439 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-nova-metadata-tls-certs\") pod \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.143465 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bhmw\" (UniqueName: \"kubernetes.io/projected/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-kube-api-access-8bhmw\") pod \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.143502 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-config-data\") pod \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\" (UID: \"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.144885 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-logs" (OuterVolumeSpecName: "logs") pod "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" (UID: "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.178692 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-kube-api-access-8bhmw" (OuterVolumeSpecName: "kube-api-access-8bhmw") pod "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" (UID: "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd"). InnerVolumeSpecName "kube-api-access-8bhmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.183564 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-config-data" (OuterVolumeSpecName: "config-data") pod "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" (UID: "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.184082 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" (UID: "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.216023 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.246585 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bhmw\" (UniqueName: \"kubernetes.io/projected/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-kube-api-access-8bhmw\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.246619 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.246631 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.246643 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.247600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" (UID: "26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.349444 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfgkj\" (UniqueName: \"kubernetes.io/projected/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-kube-api-access-rfgkj\") pod \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.349792 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-combined-ca-bundle\") pod \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.349973 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-config-data\") pod \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\" (UID: \"fdd3e3c2-b658-406c-a87b-c531aa3b5fee\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.350658 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.355528 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-kube-api-access-rfgkj" (OuterVolumeSpecName: "kube-api-access-rfgkj") pod "fdd3e3c2-b658-406c-a87b-c531aa3b5fee" (UID: "fdd3e3c2-b658-406c-a87b-c531aa3b5fee"). InnerVolumeSpecName "kube-api-access-rfgkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.382141 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-config-data" (OuterVolumeSpecName: "config-data") pod "fdd3e3c2-b658-406c-a87b-c531aa3b5fee" (UID: "fdd3e3c2-b658-406c-a87b-c531aa3b5fee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.395392 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdd3e3c2-b658-406c-a87b-c531aa3b5fee" (UID: "fdd3e3c2-b658-406c-a87b-c531aa3b5fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.453162 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfgkj\" (UniqueName: \"kubernetes.io/projected/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-kube-api-access-rfgkj\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.453198 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.453209 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd3e3c2-b658-406c-a87b-c531aa3b5fee-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.542222 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.656319 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-internal-tls-certs\") pod \"8b5a50d4-6331-449e-b615-fb8645e6974c\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.656410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-config-data\") pod \"8b5a50d4-6331-449e-b615-fb8645e6974c\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.656478 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5a50d4-6331-449e-b615-fb8645e6974c-logs\") pod \"8b5a50d4-6331-449e-b615-fb8645e6974c\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.656550 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-public-tls-certs\") pod \"8b5a50d4-6331-449e-b615-fb8645e6974c\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.656849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-combined-ca-bundle\") pod \"8b5a50d4-6331-449e-b615-fb8645e6974c\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.656903 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkj8r\" (UniqueName: \"kubernetes.io/projected/8b5a50d4-6331-449e-b615-fb8645e6974c-kube-api-access-wkj8r\") pod \"8b5a50d4-6331-449e-b615-fb8645e6974c\" (UID: \"8b5a50d4-6331-449e-b615-fb8645e6974c\") "
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.657326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b5a50d4-6331-449e-b615-fb8645e6974c-logs" (OuterVolumeSpecName: "logs") pod "8b5a50d4-6331-449e-b615-fb8645e6974c" (UID: "8b5a50d4-6331-449e-b615-fb8645e6974c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.657768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd","Type":"ContainerDied","Data":"edc04088e43b742acca26bff456c8014439783e78326f615e0ccece19ad655b8"}
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.657801 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.657820 4932 scope.go:117] "RemoveContainer" containerID="a3daa065161374673b53c615c8a08e9ba54c5624298ea3c3a743985e48d1ff11"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.658440 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5a50d4-6331-449e-b615-fb8645e6974c-logs\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.660389 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5a50d4-6331-449e-b615-fb8645e6974c-kube-api-access-wkj8r" (OuterVolumeSpecName: "kube-api-access-wkj8r") pod "8b5a50d4-6331-449e-b615-fb8645e6974c" (UID: "8b5a50d4-6331-449e-b615-fb8645e6974c"). InnerVolumeSpecName "kube-api-access-wkj8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.661953 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd3e3c2-b658-406c-a87b-c531aa3b5fee","Type":"ContainerDied","Data":"3260e8f673146e9795908d33760bcb4997b4cfe335b74fe8b8ddd7b38770e205"}
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.662094 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.665741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5a50d4-6331-449e-b615-fb8645e6974c","Type":"ContainerDied","Data":"fa2c54fccc53d5595c4f6eb7b2c9e727230d4d817b9e2d9e2c8fe850c38b6e7c"}
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.665802 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.685142 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-config-data" (OuterVolumeSpecName: "config-data") pod "8b5a50d4-6331-449e-b615-fb8645e6974c" (UID: "8b5a50d4-6331-449e-b615-fb8645e6974c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.692633 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b5a50d4-6331-449e-b615-fb8645e6974c" (UID: "8b5a50d4-6331-449e-b615-fb8645e6974c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.702333 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.702704 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.709464 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b5a50d4-6331-449e-b615-fb8645e6974c" (UID: "8b5a50d4-6331-449e-b615-fb8645e6974c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.711765 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b5a50d4-6331-449e-b615-fb8645e6974c" (UID: "8b5a50d4-6331-449e-b615-fb8645e6974c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.757515 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.760170 4932 scope.go:117] "RemoveContainer" containerID="499dd71e04450c3c555959978de3d7f18fd0435b5c5cc2e2e80e0576cd1678cb"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.761258 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.761297 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.761309 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.761323 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5a50d4-6331-449e-b615-fb8645e6974c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.761336 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkj8r\" (UniqueName: \"kubernetes.io/projected/8b5a50d4-6331-449e-b615-fb8645e6974c-kube-api-access-wkj8r\") on node \"crc\" DevicePath \"\""
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.788851 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.825545 4932 scope.go:117] "RemoveContainer" containerID="8390e9dec33a24d877faedc23730a9b00e0d38571a7d7741e6ef773fd15de300"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.848372 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.848470 4932 scope.go:117] "RemoveContainer" containerID="433142433d27ceb5886b3bede419eef5ffc5007f08eee4bb660047ee6f79af49"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861118 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.861674 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-log"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861699 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-log"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.861711 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e696cfc-972e-46a0-bf24-c88ee59fb7e1" containerName="nova-manage"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861718 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e696cfc-972e-46a0-bf24-c88ee59fb7e1" containerName="nova-manage"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.861741 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" containerName="nova-scheduler-scheduler"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861748 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" containerName="nova-scheduler-scheduler"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.861760 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a82b375-1f38-45f9-baed-1410727d4b6f" containerName="oc"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861766 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a82b375-1f38-45f9-baed-1410727d4b6f" containerName="oc"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.861782 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-log"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861789 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-log"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.861802 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-metadata"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.861808 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-metadata"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.862308 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerName="init"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862325 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerName="init"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.862334 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-api"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862341 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-api"
Mar 21 09:22:13 crc kubenswrapper[4932]: E0321 09:22:13.862367 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerName="dnsmasq-dns"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862373 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerName="dnsmasq-dns"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862573 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e696cfc-972e-46a0-bf24-c88ee59fb7e1" containerName="nova-manage"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862586 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-metadata"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862597 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-api"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862611 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" containerName="nova-scheduler-scheduler"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862621 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a5f9de-7325-4400-a5c6-17b478be66a1" containerName="dnsmasq-dns"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862633 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" containerName="nova-metadata-log"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862642 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" containerName="nova-api-log"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.862656 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a82b375-1f38-45f9-baed-1410727d4b6f" containerName="oc"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.863436 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.865380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.869106 4932 scope.go:117] "RemoveContainer" containerID="ae0bc1b07725eb5a0ec91acf611815cfa2dd73b7a8ca4de3940e1a7f8d7e4c3a"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.875990 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.887326 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.898591 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.900957 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.904156 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.904361 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.908497 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.966926 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-config-data\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967019 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbqj\" (UniqueName: \"kubernetes.io/projected/5e61e833-626e-406f-9a07-e4cbd2711bad-kube-api-access-8dbqj\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967083 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967184 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-config-data\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967308 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0"
Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967492 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e61e833-626e-406f-9a07-e4cbd2711bad-logs\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.967594 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7j5\" (UniqueName: \"kubernetes.io/projected/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-kube-api-access-rn7j5\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:13 crc kubenswrapper[4932]: I0321 09:22:13.992480 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.014226 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.024848 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.026776 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.028937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.029213 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.033090 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.038808 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.069586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-config-data\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.069661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.069694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.069710 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5e61e833-626e-406f-9a07-e4cbd2711bad-logs\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.070189 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7j5\" (UniqueName: \"kubernetes.io/projected/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-kube-api-access-rn7j5\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.070271 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e61e833-626e-406f-9a07-e4cbd2711bad-logs\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.070274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-config-data\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.070382 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dbqj\" (UniqueName: \"kubernetes.io/projected/5e61e833-626e-406f-9a07-e4cbd2711bad-kube-api-access-8dbqj\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.070436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " 
pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.073548 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.074185 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-config-data\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.074217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e61e833-626e-406f-9a07-e4cbd2711bad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.076103 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.076523 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-config-data\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.089251 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7j5\" (UniqueName: 
\"kubernetes.io/projected/f4d4ddf9-93f9-46c1-a01f-09bb2d170c34-kube-api-access-rn7j5\") pod \"nova-scheduler-0\" (UID: \"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34\") " pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.089730 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbqj\" (UniqueName: \"kubernetes.io/projected/5e61e833-626e-406f-9a07-e4cbd2711bad-kube-api-access-8dbqj\") pod \"nova-metadata-0\" (UID: \"5e61e833-626e-406f-9a07-e4cbd2711bad\") " pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.173265 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.173376 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.173451 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-public-tls-certs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.173683 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666j4\" (UniqueName: \"kubernetes.io/projected/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-kube-api-access-666j4\") pod 
\"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.173847 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-logs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.173974 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-config-data\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.187667 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.218710 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.275807 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-config-data\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.276091 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.276147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.276187 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-public-tls-certs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.276249 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666j4\" (UniqueName: \"kubernetes.io/projected/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-kube-api-access-666j4\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.276300 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-logs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.276646 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-logs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.289440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.289440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-public-tls-certs\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.289510 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-config-data\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.289991 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.298687 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-666j4\" (UniqueName: \"kubernetes.io/projected/6b875b34-e9fa-4dc6-9550-0939e59ab0c7-kube-api-access-666j4\") pod \"nova-api-0\" (UID: \"6b875b34-e9fa-4dc6-9550-0939e59ab0c7\") " pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.347236 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 21 09:22:14 crc kubenswrapper[4932]: W0321 09:22:14.853380 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4d4ddf9_93f9_46c1_a01f_09bb2d170c34.slice/crio-929447c9e4a555c9021a8818e363611f9c4d5d8687752dd69295a64bb1e8dc6e WatchSource:0}: Error finding container 929447c9e4a555c9021a8818e363611f9c4d5d8687752dd69295a64bb1e8dc6e: Status 404 returned error can't find the container with id 929447c9e4a555c9021a8818e363611f9c4d5d8687752dd69295a64bb1e8dc6e Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.857362 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.881132 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: I0321 09:22:14.893408 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 21 09:22:14 crc kubenswrapper[4932]: W0321 09:22:14.905073 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b875b34_e9fa_4dc6_9550_0939e59ab0c7.slice/crio-f8021d1dc3d02abf7c069a1fc6f267b3c0760efca32ad1dcfad706d3ab0b01ae WatchSource:0}: Error finding container f8021d1dc3d02abf7c069a1fc6f267b3c0760efca32ad1dcfad706d3ab0b01ae: Status 404 returned error can't find the container with id f8021d1dc3d02abf7c069a1fc6f267b3c0760efca32ad1dcfad706d3ab0b01ae Mar 21 09:22:15 crc 
kubenswrapper[4932]: I0321 09:22:15.714000 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd" path="/var/lib/kubelet/pods/26cb3c10-3a23-4ea5-91e1-11dc9d91f3bd/volumes" Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.714926 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5a50d4-6331-449e-b615-fb8645e6974c" path="/var/lib/kubelet/pods/8b5a50d4-6331-449e-b615-fb8645e6974c/volumes" Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.715499 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd3e3c2-b658-406c-a87b-c531aa3b5fee" path="/var/lib/kubelet/pods/fdd3e3c2-b658-406c-a87b-c531aa3b5fee/volumes" Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.841104 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e61e833-626e-406f-9a07-e4cbd2711bad","Type":"ContainerStarted","Data":"18a241fcc7a4aec1b6404ee264ddc02391112093ea98c6ec271eeeedd67ce143"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.841161 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e61e833-626e-406f-9a07-e4cbd2711bad","Type":"ContainerStarted","Data":"efe17e8e53da15e745a384b09dcfce9fd2902df4ac58f57277eacac205b07135"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.841174 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5e61e833-626e-406f-9a07-e4cbd2711bad","Type":"ContainerStarted","Data":"3de21638a9fd1e7001ec5aad343741a5e8badf49a2c3eb1a23a88143654f2613"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.842957 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b875b34-e9fa-4dc6-9550-0939e59ab0c7","Type":"ContainerStarted","Data":"17f7fde1f40fc66a9b6ad71eb1c5966b175ea2a4bf114c1a241cff475cc8b4d6"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.842981 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b875b34-e9fa-4dc6-9550-0939e59ab0c7","Type":"ContainerStarted","Data":"11a347dd35fed48c02466d6cdcfd89ffc5fa866a3efb53d4ceb4406f64fe3ddf"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.842990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b875b34-e9fa-4dc6-9550-0939e59ab0c7","Type":"ContainerStarted","Data":"f8021d1dc3d02abf7c069a1fc6f267b3c0760efca32ad1dcfad706d3ab0b01ae"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.844703 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34","Type":"ContainerStarted","Data":"ddd50c809e6f6ed80c46c8e97de689b76d0678da42023ee48644b820991c987f"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.844741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f4d4ddf9-93f9-46c1-a01f-09bb2d170c34","Type":"ContainerStarted","Data":"929447c9e4a555c9021a8818e363611f9c4d5d8687752dd69295a64bb1e8dc6e"} Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.865792 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.865769708 podStartE2EDuration="2.865769708s" podCreationTimestamp="2026-03-21 09:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:22:15.861558235 +0000 UTC m=+1439.456756504" watchObservedRunningTime="2026-03-21 09:22:15.865769708 +0000 UTC m=+1439.460967977" Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.897821 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.897792697 podStartE2EDuration="2.897792697s" podCreationTimestamp="2026-03-21 09:22:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:22:15.885961744 +0000 UTC m=+1439.481160013" watchObservedRunningTime="2026-03-21 09:22:15.897792697 +0000 UTC m=+1439.492990966" Mar 21 09:22:15 crc kubenswrapper[4932]: I0321 09:22:15.925936 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.925915063 podStartE2EDuration="2.925915063s" podCreationTimestamp="2026-03-21 09:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:22:15.91722292 +0000 UTC m=+1439.512421199" watchObservedRunningTime="2026-03-21 09:22:15.925915063 +0000 UTC m=+1439.521113332" Mar 21 09:22:16 crc kubenswrapper[4932]: I0321 09:22:16.703054 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:22:16 crc kubenswrapper[4932]: E0321 09:22:16.703523 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:22:19 crc kubenswrapper[4932]: I0321 09:22:19.188181 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.188096 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.219086 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.219138 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.219787 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.348668 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.348738 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 21 09:22:24 crc kubenswrapper[4932]: I0321 09:22:24.960733 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 21 09:22:25 crc kubenswrapper[4932]: I0321 09:22:25.238776 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e61e833-626e-406f-9a07-e4cbd2711bad" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:22:25 crc kubenswrapper[4932]: I0321 09:22:25.239375 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5e61e833-626e-406f-9a07-e4cbd2711bad" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:22:25 crc kubenswrapper[4932]: I0321 09:22:25.365488 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b875b34-e9fa-4dc6-9550-0939e59ab0c7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:22:25 crc kubenswrapper[4932]: I0321 09:22:25.365488 4932 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="6b875b34-e9fa-4dc6-9550-0939e59ab0c7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 09:22:26 crc kubenswrapper[4932]: I0321 09:22:26.830279 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 21 09:22:27 crc kubenswrapper[4932]: I0321 09:22:27.711250 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:22:27 crc kubenswrapper[4932]: E0321 09:22:27.711617 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:22:28 crc kubenswrapper[4932]: I0321 09:22:28.277462 4932 scope.go:117] "RemoveContainer" containerID="de9e72d6de64394010265bdfa3ab7b40ce8ca5a454c4eb6ab103fe847cf44d3c" Mar 21 09:22:28 crc kubenswrapper[4932]: I0321 09:22:28.300312 4932 scope.go:117] "RemoveContainer" containerID="6cd5a22856c0c7b4d3c4fc4a8cdbf1632916f16838179690db0af8a3fb36f136" Mar 21 09:22:28 crc kubenswrapper[4932]: I0321 09:22:28.365548 4932 scope.go:117] "RemoveContainer" containerID="eab492b8d29a0b17c0073fdd94a5ab7348697b6d95690f54217c869bcf8d1ea4" Mar 21 09:22:28 crc kubenswrapper[4932]: I0321 09:22:28.395742 4932 scope.go:117] "RemoveContainer" containerID="727018ad5556ba0a54cc529ca9821ed78487d923624d2468de59165b690b293e" Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.226731 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.227062 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.227125 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.227967 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d3170911019ffc2d29f28c120dc321f5dd9686c4e0cf0c0353759cae75828b5"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.228022 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://2d3170911019ffc2d29f28c120dc321f5dd9686c4e0cf0c0353759cae75828b5" gracePeriod=600 Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.475645 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.476170 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="765d61b5-f144-4784-8c7d-ac497a6b6cba" containerName="kube-state-metrics" 
containerID="cri-o://535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9" gracePeriod=30 Mar 21 09:22:30 crc kubenswrapper[4932]: I0321 09:22:30.702883 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:22:30 crc kubenswrapper[4932]: E0321 09:22:30.703178 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.008860 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.009424 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="2d3170911019ffc2d29f28c120dc321f5dd9686c4e0cf0c0353759cae75828b5" exitCode=0 Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.009522 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"2d3170911019ffc2d29f28c120dc321f5dd9686c4e0cf0c0353759cae75828b5"} Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.009565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"} Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.009602 4932 scope.go:117] "RemoveContainer" containerID="62297762b526104c5e6a38e2d50dd142e250cf66f3aafdbfb83ac66a7c17e885" Mar 21 09:22:31 
crc kubenswrapper[4932]: I0321 09:22:31.011785 4932 generic.go:334] "Generic (PLEG): container finished" podID="765d61b5-f144-4784-8c7d-ac497a6b6cba" containerID="535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9" exitCode=2 Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.011828 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"765d61b5-f144-4784-8c7d-ac497a6b6cba","Type":"ContainerDied","Data":"535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9"} Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.011905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"765d61b5-f144-4784-8c7d-ac497a6b6cba","Type":"ContainerDied","Data":"ece26c60f4bad48cbeaa916023b6fb184e96adba19ab33e128a42f1052097709"} Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.013699 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.070310 4932 scope.go:117] "RemoveContainer" containerID="535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.095097 4932 scope.go:117] "RemoveContainer" containerID="535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9" Mar 21 09:22:31 crc kubenswrapper[4932]: E0321 09:22:31.095609 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9\": container with ID starting with 535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9 not found: ID does not exist" containerID="535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.095660 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9"} err="failed to get container status \"535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9\": rpc error: code = NotFound desc = could not find container \"535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9\": container with ID starting with 535c03306c208ad056d0db6b6f0a9ddbdc5c1df257b2773d86802d2e32b8d8f9 not found: ID does not exist" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.130356 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4v62\" (UniqueName: \"kubernetes.io/projected/765d61b5-f144-4784-8c7d-ac497a6b6cba-kube-api-access-r4v62\") pod \"765d61b5-f144-4784-8c7d-ac497a6b6cba\" (UID: \"765d61b5-f144-4784-8c7d-ac497a6b6cba\") " Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.136776 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765d61b5-f144-4784-8c7d-ac497a6b6cba-kube-api-access-r4v62" (OuterVolumeSpecName: "kube-api-access-r4v62") pod "765d61b5-f144-4784-8c7d-ac497a6b6cba" (UID: "765d61b5-f144-4784-8c7d-ac497a6b6cba"). InnerVolumeSpecName "kube-api-access-r4v62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.233397 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4v62\" (UniqueName: \"kubernetes.io/projected/765d61b5-f144-4784-8c7d-ac497a6b6cba-kube-api-access-r4v62\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.349738 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.359058 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.386316 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:22:31 crc kubenswrapper[4932]: E0321 09:22:31.386850 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765d61b5-f144-4784-8c7d-ac497a6b6cba" containerName="kube-state-metrics" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.386872 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="765d61b5-f144-4784-8c7d-ac497a6b6cba" containerName="kube-state-metrics" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.387122 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="765d61b5-f144-4784-8c7d-ac497a6b6cba" containerName="kube-state-metrics" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.387909 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.390766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.391169 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.415597 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.437526 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlm8\" (UniqueName: \"kubernetes.io/projected/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-api-access-6xlm8\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.437936 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.438110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.438209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.540454 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlm8\" (UniqueName: \"kubernetes.io/projected/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-api-access-6xlm8\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.540796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.540945 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.541092 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.545542 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.546083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.547126 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b0f78b-06df-4bfa-8477-b291b7787e8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.558769 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlm8\" (UniqueName: \"kubernetes.io/projected/c8b0f78b-06df-4bfa-8477-b291b7787e8d-kube-api-access-6xlm8\") pod \"kube-state-metrics-0\" (UID: \"c8b0f78b-06df-4bfa-8477-b291b7787e8d\") " pod="openstack/kube-state-metrics-0" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.715578 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765d61b5-f144-4784-8c7d-ac497a6b6cba" path="/var/lib/kubelet/pods/765d61b5-f144-4784-8c7d-ac497a6b6cba/volumes" Mar 21 09:22:31 crc kubenswrapper[4932]: I0321 09:22:31.725393 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 21 09:22:32 crc kubenswrapper[4932]: W0321 09:22:32.178677 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b0f78b_06df_4bfa_8477_b291b7787e8d.slice/crio-db48ebc5fa871c7b308b8477dc64d2ca3c5307c67184b446df8cbc9678ba91a3 WatchSource:0}: Error finding container db48ebc5fa871c7b308b8477dc64d2ca3c5307c67184b446df8cbc9678ba91a3: Status 404 returned error can't find the container with id db48ebc5fa871c7b308b8477dc64d2ca3c5307c67184b446df8cbc9678ba91a3 Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.184261 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.219323 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.219427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.305124 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.306771 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-central-agent" containerID="cri-o://e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651" gracePeriod=30 Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.306846 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="proxy-httpd" containerID="cri-o://8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac" gracePeriod=30 Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.306929 
4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-notification-agent" containerID="cri-o://3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361" gracePeriod=30 Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.307263 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="sg-core" containerID="cri-o://c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511" gracePeriod=30 Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.347911 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 21 09:22:32 crc kubenswrapper[4932]: I0321 09:22:32.347959 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.041048 4932 generic.go:334] "Generic (PLEG): container finished" podID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerID="8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac" exitCode=0 Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.041408 4932 generic.go:334] "Generic (PLEG): container finished" podID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerID="c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511" exitCode=2 Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.041420 4932 generic.go:334] "Generic (PLEG): container finished" podID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerID="e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651" exitCode=0 Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.041138 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerDied","Data":"8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac"} Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.041497 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerDied","Data":"c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511"} Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.041805 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerDied","Data":"e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651"} Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.043075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c8b0f78b-06df-4bfa-8477-b291b7787e8d","Type":"ContainerStarted","Data":"bccc1d086090e1637916e78a4b2ae7fd677ce34e83c40263e6fbbeafa5d72d40"} Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.043102 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c8b0f78b-06df-4bfa-8477-b291b7787e8d","Type":"ContainerStarted","Data":"db48ebc5fa871c7b308b8477dc64d2ca3c5307c67184b446df8cbc9678ba91a3"} Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.043271 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 21 09:22:33 crc kubenswrapper[4932]: I0321 09:22:33.065060 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.581434352 podStartE2EDuration="2.065036872s" podCreationTimestamp="2026-03-21 09:22:31 +0000 UTC" firstStartedPulling="2026-03-21 09:22:32.181998923 +0000 UTC m=+1455.777197192" lastFinishedPulling="2026-03-21 09:22:32.665601443 +0000 UTC m=+1456.260799712" 
observedRunningTime="2026-03-21 09:22:33.058411253 +0000 UTC m=+1456.653609522" watchObservedRunningTime="2026-03-21 09:22:33.065036872 +0000 UTC m=+1456.660235141" Mar 21 09:22:34 crc kubenswrapper[4932]: I0321 09:22:34.230919 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 21 09:22:34 crc kubenswrapper[4932]: I0321 09:22:34.244662 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 21 09:22:34 crc kubenswrapper[4932]: I0321 09:22:34.246480 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 21 09:22:34 crc kubenswrapper[4932]: I0321 09:22:34.359061 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 21 09:22:34 crc kubenswrapper[4932]: I0321 09:22:34.362169 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 21 09:22:34 crc kubenswrapper[4932]: I0321 09:22:34.371025 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 21 09:22:35 crc kubenswrapper[4932]: I0321 09:22:35.068405 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 21 09:22:35 crc kubenswrapper[4932]: I0321 09:22:35.072970 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.893759 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.966761 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-log-httpd\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.966887 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-config-data\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.966930 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-sg-core-conf-yaml\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.966987 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-scripts\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.967013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p5t5\" (UniqueName: \"kubernetes.io/projected/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-kube-api-access-6p5t5\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.967047 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-run-httpd\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.967109 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-combined-ca-bundle\") pod \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\" (UID: \"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6\") " Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.967722 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.971238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.975381 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-scripts" (OuterVolumeSpecName: "scripts") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:22:36 crc kubenswrapper[4932]: I0321 09:22:36.977912 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-kube-api-access-6p5t5" (OuterVolumeSpecName: "kube-api-access-6p5t5") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "kube-api-access-6p5t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.004960 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.072148 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.072183 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.072202 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.072215 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p5t5\" (UniqueName: \"kubernetes.io/projected/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-kube-api-access-6p5t5\") on node 
\"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.072228 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.072377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-config-data" (OuterVolumeSpecName: "config-data") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.075118 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" (UID: "71b66984-4a99-4eb0-bf02-a9eb0c99f6b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.094783 4932 generic.go:334] "Generic (PLEG): container finished" podID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerID="3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361" exitCode=0 Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.095510 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerDied","Data":"3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361"} Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.095574 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71b66984-4a99-4eb0-bf02-a9eb0c99f6b6","Type":"ContainerDied","Data":"eab57ed00d9c1cec70151569f5572b52012bd26a921dbedfe131f8f027d515a5"} Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.095604 4932 scope.go:117] "RemoveContainer" containerID="8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.095830 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.118686 4932 scope.go:117] "RemoveContainer" containerID="c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.139026 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.149472 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.154879 4932 scope.go:117] "RemoveContainer" containerID="3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.171550 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.172102 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-notification-agent" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172117 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-notification-agent" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.172131 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="sg-core" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172138 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="sg-core" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.172147 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-central-agent" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172153 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-central-agent" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.172183 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="proxy-httpd" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172188 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="proxy-httpd" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172381 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-central-agent" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172397 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="ceilometer-notification-agent" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172411 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="proxy-httpd" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.172421 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" containerName="sg-core" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.174350 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.174580 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.174622 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.178378 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.178541 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.178573 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.187273 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.196484 4932 scope.go:117] "RemoveContainer" containerID="e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.215912 4932 scope.go:117] "RemoveContainer" containerID="8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.216556 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac\": container with ID starting with 8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac not found: ID does not exist" 
containerID="8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.216634 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac"} err="failed to get container status \"8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac\": rpc error: code = NotFound desc = could not find container \"8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac\": container with ID starting with 8e93102d4e32e1f198de83b7df15f49be26bbca172f7fe70f2a02d028f5139ac not found: ID does not exist" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.216691 4932 scope.go:117] "RemoveContainer" containerID="c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.217137 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511\": container with ID starting with c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511 not found: ID does not exist" containerID="c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.217173 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511"} err="failed to get container status \"c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511\": rpc error: code = NotFound desc = could not find container \"c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511\": container with ID starting with c2a7e1c440739f059c358589af634b9019ad5db8af2c5937f7770e848c010511 not found: ID does not exist" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.217197 4932 scope.go:117] 
"RemoveContainer" containerID="3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.217458 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361\": container with ID starting with 3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361 not found: ID does not exist" containerID="3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.217513 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361"} err="failed to get container status \"3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361\": rpc error: code = NotFound desc = could not find container \"3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361\": container with ID starting with 3bfa01f11847c20bd429afc027d29697a8081fc82839d7162462c2ecfefd5361 not found: ID does not exist" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.217531 4932 scope.go:117] "RemoveContainer" containerID="e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651" Mar 21 09:22:37 crc kubenswrapper[4932]: E0321 09:22:37.217848 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651\": container with ID starting with e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651 not found: ID does not exist" containerID="e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.217890 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651"} err="failed to get container status \"e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651\": rpc error: code = NotFound desc = could not find container \"e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651\": container with ID starting with e3cc3b6ad2392b9f15e32ebe1635293d7a6da965fe20beb138feeb4910373651 not found: ID does not exist" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.276575 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-scripts\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.276620 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.276642 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llllf\" (UniqueName: \"kubernetes.io/projected/fb6c0c4e-9d96-4c88-9db6-245c190489fa-kube-api-access-llllf\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.276857 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-config-data\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc 
kubenswrapper[4932]: I0321 09:22:37.276989 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.277073 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb6c0c4e-9d96-4c88-9db6-245c190489fa-log-httpd\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.277289 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.277446 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb6c0c4e-9d96-4c88-9db6-245c190489fa-run-httpd\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.378847 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-scripts\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.378896 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.378920 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llllf\" (UniqueName: \"kubernetes.io/projected/fb6c0c4e-9d96-4c88-9db6-245c190489fa-kube-api-access-llllf\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.378977 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-config-data\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.379036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.379068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb6c0c4e-9d96-4c88-9db6-245c190489fa-log-httpd\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.379142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc 
kubenswrapper[4932]: I0321 09:22:37.379192 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb6c0c4e-9d96-4c88-9db6-245c190489fa-run-httpd\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.379789 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb6c0c4e-9d96-4c88-9db6-245c190489fa-run-httpd\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.380041 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb6c0c4e-9d96-4c88-9db6-245c190489fa-log-httpd\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.385167 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.385701 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.387801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-scripts\") pod \"ceilometer-0\" (UID: 
\"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.388317 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.389286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c0c4e-9d96-4c88-9db6-245c190489fa-config-data\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.403472 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llllf\" (UniqueName: \"kubernetes.io/projected/fb6c0c4e-9d96-4c88-9db6-245c190489fa-kube-api-access-llllf\") pod \"ceilometer-0\" (UID: \"fb6c0c4e-9d96-4c88-9db6-245c190489fa\") " pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.497037 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.715653 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b66984-4a99-4eb0-bf02-a9eb0c99f6b6" path="/var/lib/kubelet/pods/71b66984-4a99-4eb0-bf02-a9eb0c99f6b6/volumes" Mar 21 09:22:37 crc kubenswrapper[4932]: I0321 09:22:37.984354 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 21 09:22:38 crc kubenswrapper[4932]: I0321 09:22:38.110684 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb6c0c4e-9d96-4c88-9db6-245c190489fa","Type":"ContainerStarted","Data":"52fed38b50f6f2c86c098e729c5a10efeb1802be4124d4f6d0784728526c16f5"} Mar 21 09:22:38 crc kubenswrapper[4932]: I0321 09:22:38.703717 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:22:38 crc kubenswrapper[4932]: E0321 09:22:38.704306 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:22:39 crc kubenswrapper[4932]: I0321 09:22:39.126173 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb6c0c4e-9d96-4c88-9db6-245c190489fa","Type":"ContainerStarted","Data":"8e289eb8552d03d28b0584a53d949c86b93db7c90f947af073b3fec6abd828fe"} Mar 21 09:22:39 crc kubenswrapper[4932]: I0321 09:22:39.126256 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb6c0c4e-9d96-4c88-9db6-245c190489fa","Type":"ContainerStarted","Data":"1d6276b34b56d774447c8958bd7e6f5327077a2d5a84e2df797c1246e5bf43c2"} Mar 21 09:22:40 crc kubenswrapper[4932]: I0321 
09:22:40.152164 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb6c0c4e-9d96-4c88-9db6-245c190489fa","Type":"ContainerStarted","Data":"49dec6873ed5ffa664b342cf68a781cfc65452e22318c3133f585fe4eb465098"} Mar 21 09:22:41 crc kubenswrapper[4932]: I0321 09:22:41.163372 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb6c0c4e-9d96-4c88-9db6-245c190489fa","Type":"ContainerStarted","Data":"ef3c88dd80804ec0ecb8d92d284f4be16f9e9a3308c80d82b7b26e6f678b3cf3"} Mar 21 09:22:41 crc kubenswrapper[4932]: I0321 09:22:41.163603 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 21 09:22:41 crc kubenswrapper[4932]: I0321 09:22:41.192336 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.299691028 podStartE2EDuration="4.192314668s" podCreationTimestamp="2026-03-21 09:22:37 +0000 UTC" firstStartedPulling="2026-03-21 09:22:37.982879264 +0000 UTC m=+1461.578077563" lastFinishedPulling="2026-03-21 09:22:40.875502934 +0000 UTC m=+1464.470701203" observedRunningTime="2026-03-21 09:22:41.184435799 +0000 UTC m=+1464.779634078" watchObservedRunningTime="2026-03-21 09:22:41.192314668 +0000 UTC m=+1464.787512947" Mar 21 09:22:41 crc kubenswrapper[4932]: I0321 09:22:41.733703 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 21 09:22:44 crc kubenswrapper[4932]: I0321 09:22:44.703120 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:22:44 crc kubenswrapper[4932]: E0321 09:22:44.703803 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:22:50 crc kubenswrapper[4932]: I0321 09:22:50.703207 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:22:50 crc kubenswrapper[4932]: E0321 09:22:50.703849 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:22:57 crc kubenswrapper[4932]: I0321 09:22:57.709260 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:22:57 crc kubenswrapper[4932]: E0321 09:22:57.710033 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:23:03 crc kubenswrapper[4932]: I0321 09:23:03.703879 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:23:03 crc kubenswrapper[4932]: E0321 09:23:03.705172 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:23:07 crc kubenswrapper[4932]: I0321 09:23:07.512462 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Mar 21 09:23:12 crc kubenswrapper[4932]: I0321 09:23:12.703703 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:23:13 crc kubenswrapper[4932]: I0321 09:23:13.471593 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"} Mar 21 09:23:14 crc kubenswrapper[4932]: I0321 09:23:14.702760 4932 scope.go:117] "RemoveContainer" containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:23:14 crc kubenswrapper[4932]: E0321 09:23:14.703142 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:23:17 crc kubenswrapper[4932]: I0321 09:23:17.741302 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:23:17 crc kubenswrapper[4932]: I0321 09:23:17.741940 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:23:21 crc kubenswrapper[4932]: I0321 09:23:21.555953 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" exitCode=1 Mar 21 09:23:21 crc kubenswrapper[4932]: I0321 09:23:21.556000 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" 
event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"} Mar 21 09:23:21 crc kubenswrapper[4932]: I0321 09:23:21.556587 4932 scope.go:117] "RemoveContainer" containerID="bd32d3d77c8495df5a02e8d639152560c12a09bb52e292687d6b04f1526d0f78" Mar 21 09:23:21 crc kubenswrapper[4932]: I0321 09:23:21.557963 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" Mar 21 09:23:21 crc kubenswrapper[4932]: E0321 09:23:21.558749 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:23:27 crc kubenswrapper[4932]: I0321 09:23:27.740969 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:23:27 crc kubenswrapper[4932]: I0321 09:23:27.741965 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:23:27 crc kubenswrapper[4932]: I0321 09:23:27.742961 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" Mar 21 09:23:27 crc kubenswrapper[4932]: E0321 09:23:27.743183 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:23:29 crc kubenswrapper[4932]: I0321 09:23:29.703526 4932 scope.go:117] "RemoveContainer" 
containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:23:30 crc kubenswrapper[4932]: I0321 09:23:30.652467 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"} Mar 21 09:23:37 crc kubenswrapper[4932]: I0321 09:23:37.948675 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:23:37 crc kubenswrapper[4932]: I0321 09:23:37.949195 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:23:38 crc kubenswrapper[4932]: I0321 09:23:38.702908 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" Mar 21 09:23:38 crc kubenswrapper[4932]: E0321 09:23:38.703288 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:23:38 crc kubenswrapper[4932]: I0321 09:23:38.736095 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" exitCode=1 Mar 21 09:23:38 crc kubenswrapper[4932]: I0321 09:23:38.736149 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"} Mar 21 09:23:38 crc kubenswrapper[4932]: I0321 09:23:38.736191 4932 scope.go:117] "RemoveContainer" 
containerID="3fb140719984f2396d394cc259e4b915dd63f704ef4bf6c8d7129d5cdb724520" Mar 21 09:23:38 crc kubenswrapper[4932]: I0321 09:23:38.737191 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:23:38 crc kubenswrapper[4932]: E0321 09:23:38.737548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:23:47 crc kubenswrapper[4932]: I0321 09:23:47.948509 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:23:47 crc kubenswrapper[4932]: I0321 09:23:47.949116 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:23:47 crc kubenswrapper[4932]: I0321 09:23:47.950222 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:23:47 crc kubenswrapper[4932]: E0321 09:23:47.950561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:23:51 crc kubenswrapper[4932]: I0321 09:23:51.703339 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" Mar 21 09:23:51 crc kubenswrapper[4932]: E0321 09:23:51.703998 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:23:59 crc kubenswrapper[4932]: I0321 09:23:59.703955 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:23:59 crc kubenswrapper[4932]: E0321 09:23:59.705271 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.144339 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568084-gfk8p"] Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.145841 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568084-gfk8p" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.148291 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.148291 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.148295 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.154000 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568084-gfk8p"] Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.287424 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qlg\" (UniqueName: \"kubernetes.io/projected/b5cbc8bb-0dc6-4b75-936d-661f39208aa0-kube-api-access-g6qlg\") pod \"auto-csr-approver-29568084-gfk8p\" (UID: \"b5cbc8bb-0dc6-4b75-936d-661f39208aa0\") " pod="openshift-infra/auto-csr-approver-29568084-gfk8p" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.389878 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6qlg\" (UniqueName: \"kubernetes.io/projected/b5cbc8bb-0dc6-4b75-936d-661f39208aa0-kube-api-access-g6qlg\") pod \"auto-csr-approver-29568084-gfk8p\" (UID: \"b5cbc8bb-0dc6-4b75-936d-661f39208aa0\") " pod="openshift-infra/auto-csr-approver-29568084-gfk8p" Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.411331 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6qlg\" (UniqueName: \"kubernetes.io/projected/b5cbc8bb-0dc6-4b75-936d-661f39208aa0-kube-api-access-g6qlg\") pod \"auto-csr-approver-29568084-gfk8p\" (UID: \"b5cbc8bb-0dc6-4b75-936d-661f39208aa0\") " 
pod="openshift-infra/auto-csr-approver-29568084-gfk8p"
Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.469028 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568084-gfk8p"
Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.928759 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568084-gfk8p"]
Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.934944 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 21 09:24:00 crc kubenswrapper[4932]: I0321 09:24:00.948689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568084-gfk8p" event={"ID":"b5cbc8bb-0dc6-4b75-936d-661f39208aa0","Type":"ContainerStarted","Data":"bdc126abc25968e8870fa2a18fcd1ec61702b62ae457721373ac51aeb7f74000"}
Mar 21 09:24:02 crc kubenswrapper[4932]: I0321 09:24:02.976394 4932 generic.go:334] "Generic (PLEG): container finished" podID="b5cbc8bb-0dc6-4b75-936d-661f39208aa0" containerID="e6af1ee035b663c6af79aa9008bdd99565d9c49e660ddabcca66d2858c97c4c1" exitCode=0
Mar 21 09:24:02 crc kubenswrapper[4932]: I0321 09:24:02.976483 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568084-gfk8p" event={"ID":"b5cbc8bb-0dc6-4b75-936d-661f39208aa0","Type":"ContainerDied","Data":"e6af1ee035b663c6af79aa9008bdd99565d9c49e660ddabcca66d2858c97c4c1"}
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.321881 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568084-gfk8p"
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.475246 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6qlg\" (UniqueName: \"kubernetes.io/projected/b5cbc8bb-0dc6-4b75-936d-661f39208aa0-kube-api-access-g6qlg\") pod \"b5cbc8bb-0dc6-4b75-936d-661f39208aa0\" (UID: \"b5cbc8bb-0dc6-4b75-936d-661f39208aa0\") "
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.481585 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5cbc8bb-0dc6-4b75-936d-661f39208aa0-kube-api-access-g6qlg" (OuterVolumeSpecName: "kube-api-access-g6qlg") pod "b5cbc8bb-0dc6-4b75-936d-661f39208aa0" (UID: "b5cbc8bb-0dc6-4b75-936d-661f39208aa0"). InnerVolumeSpecName "kube-api-access-g6qlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.577872 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6qlg\" (UniqueName: \"kubernetes.io/projected/b5cbc8bb-0dc6-4b75-936d-661f39208aa0-kube-api-access-g6qlg\") on node \"crc\" DevicePath \"\""
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.702936 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:24:04 crc kubenswrapper[4932]: E0321 09:24:04.703557 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.999873 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568084-gfk8p" event={"ID":"b5cbc8bb-0dc6-4b75-936d-661f39208aa0","Type":"ContainerDied","Data":"bdc126abc25968e8870fa2a18fcd1ec61702b62ae457721373ac51aeb7f74000"}
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.999917 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc126abc25968e8870fa2a18fcd1ec61702b62ae457721373ac51aeb7f74000"
Mar 21 09:24:04 crc kubenswrapper[4932]: I0321 09:24:04.999974 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568084-gfk8p"
Mar 21 09:24:05 crc kubenswrapper[4932]: I0321 09:24:05.402435 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568078-9kxgq"]
Mar 21 09:24:05 crc kubenswrapper[4932]: I0321 09:24:05.415459 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568078-9kxgq"]
Mar 21 09:24:05 crc kubenswrapper[4932]: I0321 09:24:05.714111 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27dadb54-a5a3-4fab-978e-0453dc63539f" path="/var/lib/kubelet/pods/27dadb54-a5a3-4fab-978e-0453dc63539f/volumes"
Mar 21 09:24:13 crc kubenswrapper[4932]: I0321 09:24:13.703309 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:24:13 crc kubenswrapper[4932]: E0321 09:24:13.704663 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.848545 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-prt49"]
Mar 21 09:24:14 crc kubenswrapper[4932]: E0321 09:24:14.849573 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cbc8bb-0dc6-4b75-936d-661f39208aa0" containerName="oc"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.849594 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cbc8bb-0dc6-4b75-936d-661f39208aa0" containerName="oc"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.849867 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5cbc8bb-0dc6-4b75-936d-661f39208aa0" containerName="oc"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.852513 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.860265 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prt49"]
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.883409 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-catalog-content\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.883526 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cph7p\" (UniqueName: \"kubernetes.io/projected/962ed459-fdfb-4a0a-844a-c0f609ebb540-kube-api-access-cph7p\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.883678 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-utilities\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.985397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-utilities\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.985840 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-utilities\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.986002 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-catalog-content\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.986069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cph7p\" (UniqueName: \"kubernetes.io/projected/962ed459-fdfb-4a0a-844a-c0f609ebb540-kube-api-access-cph7p\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:14 crc kubenswrapper[4932]: I0321 09:24:14.986394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-catalog-content\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:15 crc kubenswrapper[4932]: I0321 09:24:15.008895 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cph7p\" (UniqueName: \"kubernetes.io/projected/962ed459-fdfb-4a0a-844a-c0f609ebb540-kube-api-access-cph7p\") pod \"redhat-operators-prt49\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") " pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:15 crc kubenswrapper[4932]: I0321 09:24:15.179434 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:15 crc kubenswrapper[4932]: I0321 09:24:15.690942 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prt49"]
Mar 21 09:24:15 crc kubenswrapper[4932]: I0321 09:24:15.706698 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:24:15 crc kubenswrapper[4932]: E0321 09:24:15.706985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:24:16 crc kubenswrapper[4932]: I0321 09:24:16.430389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerStarted","Data":"f03529b7922d9bbdae3ba4774b235facf7121d5198714828b189c4adde5fe645"}
Mar 21 09:24:17 crc kubenswrapper[4932]: I0321 09:24:17.443759 4932 generic.go:334] "Generic (PLEG): container finished" podID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerID="64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca" exitCode=0
Mar 21 09:24:17 crc kubenswrapper[4932]: I0321 09:24:17.443902 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerDied","Data":"64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca"}
Mar 21 09:24:18 crc kubenswrapper[4932]: I0321 09:24:18.457840 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerStarted","Data":"1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8"}
Mar 21 09:24:21 crc kubenswrapper[4932]: I0321 09:24:21.490950 4932 generic.go:334] "Generic (PLEG): container finished" podID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerID="1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8" exitCode=0
Mar 21 09:24:21 crc kubenswrapper[4932]: I0321 09:24:21.491000 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerDied","Data":"1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8"}
Mar 21 09:24:22 crc kubenswrapper[4932]: I0321 09:24:22.504654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerStarted","Data":"4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128"}
Mar 21 09:24:22 crc kubenswrapper[4932]: I0321 09:24:22.523302 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-prt49" podStartSLOduration=3.83179474 podStartE2EDuration="8.523279074s" podCreationTimestamp="2026-03-21 09:24:14 +0000 UTC" firstStartedPulling="2026-03-21 09:24:17.445790399 +0000 UTC m=+1561.040988668" lastFinishedPulling="2026-03-21 09:24:22.137274743 +0000 UTC m=+1565.732473002" observedRunningTime="2026-03-21 09:24:22.522271943 +0000 UTC m=+1566.117470212" watchObservedRunningTime="2026-03-21 09:24:22.523279074 +0000 UTC m=+1566.118477343"
Mar 21 09:24:24 crc kubenswrapper[4932]: I0321 09:24:24.703028 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:24:24 crc kubenswrapper[4932]: E0321 09:24:24.703544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:24:25 crc kubenswrapper[4932]: I0321 09:24:25.180846 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:25 crc kubenswrapper[4932]: I0321 09:24:25.180934 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:26 crc kubenswrapper[4932]: I0321 09:24:26.230417 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-prt49" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="registry-server" probeResult="failure" output=<
Mar 21 09:24:26 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s
Mar 21 09:24:26 crc kubenswrapper[4932]: >
Mar 21 09:24:28 crc kubenswrapper[4932]: I0321 09:24:28.665289 4932 scope.go:117] "RemoveContainer" containerID="e8317985e3807488c4d5152971d4e5f8f4622e87b6a26ed0178a0d398fe681bd"
Mar 21 09:24:28 crc kubenswrapper[4932]: I0321 09:24:28.691588 4932 scope.go:117] "RemoveContainer" containerID="590c7745cc01e3d5ed8589f6da0bd35cefde7190849e23121d45b81c4a286f6b"
Mar 21 09:24:28 crc kubenswrapper[4932]: I0321 09:24:28.702320 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:24:28 crc kubenswrapper[4932]: E0321 09:24:28.702698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:24:28 crc kubenswrapper[4932]: I0321 09:24:28.762299 4932 scope.go:117] "RemoveContainer" containerID="3dcc4c4b805e95b0398a0d8a5d04fa36701678b1156c854a537e115731256f7b"
Mar 21 09:24:30 crc kubenswrapper[4932]: I0321 09:24:30.225477 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:24:30 crc kubenswrapper[4932]: I0321 09:24:30.225583 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:24:35 crc kubenswrapper[4932]: I0321 09:24:35.235570 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:35 crc kubenswrapper[4932]: I0321 09:24:35.282744 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:35 crc kubenswrapper[4932]: I0321 09:24:35.477905 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-prt49"]
Mar 21 09:24:36 crc kubenswrapper[4932]: I0321 09:24:36.656856 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-prt49" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="registry-server" containerID="cri-o://4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128" gracePeriod=2
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.093785 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.185259 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cph7p\" (UniqueName: \"kubernetes.io/projected/962ed459-fdfb-4a0a-844a-c0f609ebb540-kube-api-access-cph7p\") pod \"962ed459-fdfb-4a0a-844a-c0f609ebb540\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") "
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.185454 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-utilities\") pod \"962ed459-fdfb-4a0a-844a-c0f609ebb540\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") "
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.185477 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-catalog-content\") pod \"962ed459-fdfb-4a0a-844a-c0f609ebb540\" (UID: \"962ed459-fdfb-4a0a-844a-c0f609ebb540\") "
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.186632 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-utilities" (OuterVolumeSpecName: "utilities") pod "962ed459-fdfb-4a0a-844a-c0f609ebb540" (UID: "962ed459-fdfb-4a0a-844a-c0f609ebb540"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.190885 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962ed459-fdfb-4a0a-844a-c0f609ebb540-kube-api-access-cph7p" (OuterVolumeSpecName: "kube-api-access-cph7p") pod "962ed459-fdfb-4a0a-844a-c0f609ebb540" (UID: "962ed459-fdfb-4a0a-844a-c0f609ebb540"). InnerVolumeSpecName "kube-api-access-cph7p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.287750 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cph7p\" (UniqueName: \"kubernetes.io/projected/962ed459-fdfb-4a0a-844a-c0f609ebb540-kube-api-access-cph7p\") on node \"crc\" DevicePath \"\""
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.288048 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.322180 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "962ed459-fdfb-4a0a-844a-c0f609ebb540" (UID: "962ed459-fdfb-4a0a-844a-c0f609ebb540"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.390146 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ed459-fdfb-4a0a-844a-c0f609ebb540-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.673647 4932 generic.go:334] "Generic (PLEG): container finished" podID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerID="4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128" exitCode=0
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.673705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerDied","Data":"4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128"}
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.673747 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prt49" event={"ID":"962ed459-fdfb-4a0a-844a-c0f609ebb540","Type":"ContainerDied","Data":"f03529b7922d9bbdae3ba4774b235facf7121d5198714828b189c4adde5fe645"}
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.673771 4932 scope.go:117] "RemoveContainer" containerID="4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.674339 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prt49"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.696889 4932 scope.go:117] "RemoveContainer" containerID="1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.711268 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:24:37 crc kubenswrapper[4932]: E0321 09:24:37.711555 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.718644 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-prt49"]
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.726126 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-prt49"]
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.731903 4932 scope.go:117] "RemoveContainer" containerID="64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.769735 4932 scope.go:117] "RemoveContainer" containerID="4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128"
Mar 21 09:24:37 crc kubenswrapper[4932]: E0321 09:24:37.770298 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128\": container with ID starting with 4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128 not found: ID does not exist" containerID="4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.770391 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128"} err="failed to get container status \"4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128\": rpc error: code = NotFound desc = could not find container \"4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128\": container with ID starting with 4d95f3aa2d415b68e0d414d08efb71ac429fbe1c2dcf0c0e4bb555d727b9d128 not found: ID does not exist"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.770428 4932 scope.go:117] "RemoveContainer" containerID="1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8"
Mar 21 09:24:37 crc kubenswrapper[4932]: E0321 09:24:37.771086 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8\": container with ID starting with 1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8 not found: ID does not exist" containerID="1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.771143 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8"} err="failed to get container status \"1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8\": rpc error: code = NotFound desc = could not find container \"1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8\": container with ID starting with 1d2a7022740114ebc281144aebaba831cde15f83acd23ca6bdec9b0d1f095fa8 not found: ID does not exist"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.771174 4932 scope.go:117] "RemoveContainer" containerID="64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca"
Mar 21 09:24:37 crc kubenswrapper[4932]: E0321 09:24:37.771501 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca\": container with ID starting with 64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca not found: ID does not exist" containerID="64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca"
Mar 21 09:24:37 crc kubenswrapper[4932]: I0321 09:24:37.771531 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca"} err="failed to get container status \"64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca\": rpc error: code = NotFound desc = could not find container \"64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca\": container with ID starting with 64cb80c9c6e8f4a4857a274a99c34bb75196504194a7cafabe26dd143b6e63ca not found: ID does not exist"
Mar 21 09:24:39 crc kubenswrapper[4932]: I0321 09:24:39.716384 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" path="/var/lib/kubelet/pods/962ed459-fdfb-4a0a-844a-c0f609ebb540/volumes"
Mar 21 09:24:43 crc kubenswrapper[4932]: I0321 09:24:43.704008 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:24:43 crc kubenswrapper[4932]: E0321 09:24:43.704825 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:24:52 crc kubenswrapper[4932]: I0321 09:24:52.703012 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:24:52 crc kubenswrapper[4932]: E0321 09:24:52.704212 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:24:58 crc kubenswrapper[4932]: I0321 09:24:58.703122 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:24:58 crc kubenswrapper[4932]: E0321 09:24:58.703840 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:25:00 crc kubenswrapper[4932]: I0321 09:25:00.226340 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:25:00 crc kubenswrapper[4932]: I0321 09:25:00.226688 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:25:03 crc kubenswrapper[4932]: I0321 09:25:03.703886 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:25:03 crc kubenswrapper[4932]: E0321 09:25:03.704797 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:25:10 crc kubenswrapper[4932]: I0321 09:25:10.703181 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:25:10 crc kubenswrapper[4932]: E0321 09:25:10.704229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:25:18 crc kubenswrapper[4932]: I0321 09:25:18.703088 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:25:18 crc kubenswrapper[4932]: E0321 09:25:18.703859 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:25:21 crc kubenswrapper[4932]: I0321 09:25:21.702601 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:25:21 crc kubenswrapper[4932]: E0321 09:25:21.703615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:25:28 crc kubenswrapper[4932]: I0321 09:25:28.884310 4932 scope.go:117] "RemoveContainer" containerID="bc8fe310a3f81989e870f3cca4f6e5523a75b5b8fa09b2ea7ffad2deafb5f531"
Mar 21 09:25:28 crc kubenswrapper[4932]: I0321 09:25:28.929961 4932 scope.go:117] "RemoveContainer" containerID="f1381b54addd3f3d69a7babc6def920636f223b1b9cf888ae4126ef6a122ff6c"
Mar 21 09:25:28 crc kubenswrapper[4932]: I0321 09:25:28.981886 4932 scope.go:117] "RemoveContainer" containerID="6825e77cd1f92edb42c5fcbdb87f997ba1ef386603eea11a7e369fe543582f30"
Mar 21 09:25:29 crc kubenswrapper[4932]: I0321 09:25:29.002218 4932 scope.go:117] "RemoveContainer" containerID="4d5c3438385fad79e4841b944a2b7573bd9d689f5005a07dc6ee241bdff0269d"
Mar 21 09:25:29 crc kubenswrapper[4932]: I0321 09:25:29.034916 4932 scope.go:117] "RemoveContainer" containerID="77b5a0246f496d6b30f7e1f49de718ee5bcb6edc0034140f1edc7488d6ac63d1"
Mar 21 09:25:29 crc kubenswrapper[4932]: I0321 09:25:29.703095 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:25:29 crc kubenswrapper[4932]: E0321 09:25:29.703884 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:25:30 crc kubenswrapper[4932]: I0321 09:25:30.225738 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:25:30 crc kubenswrapper[4932]: I0321 09:25:30.225805 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:25:30 crc kubenswrapper[4932]: I0321 09:25:30.225847 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b"
Mar 21 09:25:30 crc kubenswrapper[4932]: I0321 09:25:30.226680 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 21 09:25:30 crc kubenswrapper[4932]: I0321 09:25:30.226743 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" gracePeriod=600
Mar 21 09:25:30 crc kubenswrapper[4932]: E0321 09:25:30.347636 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:25:31 crc kubenswrapper[4932]: I0321 09:25:31.205785 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" exitCode=0
Mar 21 09:25:31 crc kubenswrapper[4932]: I0321 09:25:31.205830 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"}
Mar 21 09:25:31 crc kubenswrapper[4932]: I0321 09:25:31.206171 4932 scope.go:117] "RemoveContainer" containerID="2d3170911019ffc2d29f28c120dc321f5dd9686c4e0cf0c0353759cae75828b5"
Mar 21 09:25:31 crc kubenswrapper[4932]: I0321 09:25:31.206913 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:25:31 crc kubenswrapper[4932]: E0321 09:25:31.207175 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:25:36 crc kubenswrapper[4932]: I0321 09:25:36.703572 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:25:36 crc kubenswrapper[4932]: E0321 09:25:36.704492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:25:42 crc kubenswrapper[4932]: I0321 09:25:42.703147 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5"
Mar 21 09:25:42 crc kubenswrapper[4932]: E0321 09:25:42.703965 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:25:45 crc kubenswrapper[4932]: I0321 09:25:45.703461 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:25:45 crc kubenswrapper[4932]: E0321 09:25:45.704197 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:25:50 crc kubenswrapper[4932]: I0321 09:25:50.703446 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1"
Mar 21 09:25:50 crc kubenswrapper[4932]: E0321 09:25:50.704361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj"
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:25:53 crc kubenswrapper[4932]: I0321 09:25:53.713182 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:25:53 crc kubenswrapper[4932]: E0321 09:25:53.720023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.143540 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568086-g84hp"] Mar 21 09:26:00 crc kubenswrapper[4932]: E0321 09:26:00.144489 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="extract-utilities" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.144503 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="extract-utilities" Mar 21 09:26:00 crc kubenswrapper[4932]: E0321 09:26:00.144516 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="registry-server" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.144524 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="registry-server" Mar 21 09:26:00 crc kubenswrapper[4932]: E0321 09:26:00.144561 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="extract-content" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.144568 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" 
containerName="extract-content" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.144759 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="962ed459-fdfb-4a0a-844a-c0f609ebb540" containerName="registry-server" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.145553 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.148215 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.148215 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.148362 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.157331 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568086-g84hp"] Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.343027 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6q98\" (UniqueName: \"kubernetes.io/projected/b0d274d9-c8e6-47e4-afc3-93c6023540da-kube-api-access-n6q98\") pod \"auto-csr-approver-29568086-g84hp\" (UID: \"b0d274d9-c8e6-47e4-afc3-93c6023540da\") " pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.445195 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6q98\" (UniqueName: \"kubernetes.io/projected/b0d274d9-c8e6-47e4-afc3-93c6023540da-kube-api-access-n6q98\") pod \"auto-csr-approver-29568086-g84hp\" (UID: \"b0d274d9-c8e6-47e4-afc3-93c6023540da\") " pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 
09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.463582 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6q98\" (UniqueName: \"kubernetes.io/projected/b0d274d9-c8e6-47e4-afc3-93c6023540da-kube-api-access-n6q98\") pod \"auto-csr-approver-29568086-g84hp\" (UID: \"b0d274d9-c8e6-47e4-afc3-93c6023540da\") " pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.465224 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.705553 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:26:00 crc kubenswrapper[4932]: E0321 09:26:00.706427 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:26:00 crc kubenswrapper[4932]: I0321 09:26:00.908691 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568086-g84hp"] Mar 21 09:26:01 crc kubenswrapper[4932]: I0321 09:26:01.497298 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568086-g84hp" event={"ID":"b0d274d9-c8e6-47e4-afc3-93c6023540da","Type":"ContainerStarted","Data":"7976df35bfe787d8d16ce7d4a1e412974bacef531777480c01b0224a78c9d60a"} Mar 21 09:26:02 crc kubenswrapper[4932]: I0321 09:26:02.509079 4932 generic.go:334] "Generic (PLEG): container finished" podID="b0d274d9-c8e6-47e4-afc3-93c6023540da" 
containerID="cfa2cdea727b3580a314ed7bcf3299c97424946b4a17a8d8149f37101a7c553d" exitCode=0 Mar 21 09:26:02 crc kubenswrapper[4932]: I0321 09:26:02.509139 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568086-g84hp" event={"ID":"b0d274d9-c8e6-47e4-afc3-93c6023540da","Type":"ContainerDied","Data":"cfa2cdea727b3580a314ed7bcf3299c97424946b4a17a8d8149f37101a7c553d"} Mar 21 09:26:03 crc kubenswrapper[4932]: I0321 09:26:03.710109 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" Mar 21 09:26:03 crc kubenswrapper[4932]: I0321 09:26:03.894537 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.021737 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6q98\" (UniqueName: \"kubernetes.io/projected/b0d274d9-c8e6-47e4-afc3-93c6023540da-kube-api-access-n6q98\") pod \"b0d274d9-c8e6-47e4-afc3-93c6023540da\" (UID: \"b0d274d9-c8e6-47e4-afc3-93c6023540da\") " Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.028187 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d274d9-c8e6-47e4-afc3-93c6023540da-kube-api-access-n6q98" (OuterVolumeSpecName: "kube-api-access-n6q98") pod "b0d274d9-c8e6-47e4-afc3-93c6023540da" (UID: "b0d274d9-c8e6-47e4-afc3-93c6023540da"). InnerVolumeSpecName "kube-api-access-n6q98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.124586 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6q98\" (UniqueName: \"kubernetes.io/projected/b0d274d9-c8e6-47e4-afc3-93c6023540da-kube-api-access-n6q98\") on node \"crc\" DevicePath \"\"" Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.530131 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568086-g84hp" event={"ID":"b0d274d9-c8e6-47e4-afc3-93c6023540da","Type":"ContainerDied","Data":"7976df35bfe787d8d16ce7d4a1e412974bacef531777480c01b0224a78c9d60a"} Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.530173 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7976df35bfe787d8d16ce7d4a1e412974bacef531777480c01b0224a78c9d60a" Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.530171 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568086-g84hp" Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.537634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"} Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.702985 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:26:04 crc kubenswrapper[4932]: E0321 09:26:04.703217 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:26:04 crc 
kubenswrapper[4932]: I0321 09:26:04.969563 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568080-4t8ff"] Mar 21 09:26:04 crc kubenswrapper[4932]: I0321 09:26:04.979763 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568080-4t8ff"] Mar 21 09:26:05 crc kubenswrapper[4932]: I0321 09:26:05.713149 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f72379-7d9f-4c04-b560-ba0495427abd" path="/var/lib/kubelet/pods/24f72379-7d9f-4c04-b560-ba0495427abd/volumes" Mar 21 09:26:07 crc kubenswrapper[4932]: I0321 09:26:07.741267 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:26:07 crc kubenswrapper[4932]: I0321 09:26:07.741606 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:26:12 crc kubenswrapper[4932]: I0321 09:26:12.611906 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" exitCode=1 Mar 21 09:26:12 crc kubenswrapper[4932]: I0321 09:26:12.611990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"} Mar 21 09:26:12 crc kubenswrapper[4932]: I0321 09:26:12.612515 4932 scope.go:117] "RemoveContainer" containerID="6383fec2647766822d9223127baac41fc0b1ae4a5aa94797cdcc6184efcea8a1" Mar 21 09:26:12 crc kubenswrapper[4932]: I0321 09:26:12.613368 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:26:12 crc kubenswrapper[4932]: E0321 09:26:12.613724 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:26:15 crc kubenswrapper[4932]: I0321 09:26:15.703453 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:26:15 crc kubenswrapper[4932]: E0321 09:26:15.704374 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:26:17 crc kubenswrapper[4932]: I0321 09:26:17.740474 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:26:17 crc kubenswrapper[4932]: I0321 09:26:17.740814 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:26:17 crc kubenswrapper[4932]: I0321 09:26:17.742124 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:26:17 crc kubenswrapper[4932]: E0321 09:26:17.742401 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:26:18 crc kubenswrapper[4932]: I0321 09:26:18.703472 4932 scope.go:117] "RemoveContainer" 
containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:26:19 crc kubenswrapper[4932]: I0321 09:26:19.682602 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"} Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.709997 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:26:27 crc kubenswrapper[4932]: E0321 09:26:27.710930 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.769892 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" exitCode=1 Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.769939 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"} Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.769977 4932 scope.go:117] "RemoveContainer" containerID="d910ca15e2debdf2bed821b82810a84ed2afbb9a43bbd773d0a2d3e81796c5f5" Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.770870 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:26:27 
crc kubenswrapper[4932]: E0321 09:26:27.771095 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.948366 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.948430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.948444 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:26:27 crc kubenswrapper[4932]: I0321 09:26:27.948458 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:26:28 crc kubenswrapper[4932]: I0321 09:26:28.781133 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:26:28 crc kubenswrapper[4932]: E0321 09:26:28.781685 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:26:29 crc kubenswrapper[4932]: I0321 09:26:29.133959 4932 scope.go:117] "RemoveContainer" containerID="58c49e2d8620ae87f0dc2cdec7d3fe78c467c74bf39c7187b86fb9c6d5d68fe9" Mar 21 09:26:29 crc kubenswrapper[4932]: I0321 09:26:29.160996 4932 scope.go:117] "RemoveContainer" 
containerID="3e0ea409fd465250c2a767fbf2c1c6fb405342320c958b7b3dcee33a55c9dfba" Mar 21 09:26:29 crc kubenswrapper[4932]: I0321 09:26:29.183956 4932 scope.go:117] "RemoveContainer" containerID="53d94f95343e4797016ba4b73087ef254817b72711e916d867684b782edd6634" Mar 21 09:26:32 crc kubenswrapper[4932]: I0321 09:26:32.703096 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:26:32 crc kubenswrapper[4932]: E0321 09:26:32.704442 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:26:40 crc kubenswrapper[4932]: I0321 09:26:40.703280 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:26:40 crc kubenswrapper[4932]: E0321 09:26:40.704198 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:26:41 crc kubenswrapper[4932]: I0321 09:26:41.703014 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:26:41 crc kubenswrapper[4932]: E0321 09:26:41.703616 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:26:45 crc kubenswrapper[4932]: I0321 09:26:45.702209 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:26:45 crc kubenswrapper[4932]: E0321 09:26:45.702808 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:26:52 crc kubenswrapper[4932]: I0321 09:26:52.703172 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:26:52 crc kubenswrapper[4932]: E0321 09:26:52.704190 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:26:54 crc kubenswrapper[4932]: I0321 09:26:54.702765 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:26:54 crc kubenswrapper[4932]: E0321 09:26:54.703394 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:26:58 crc 
kubenswrapper[4932]: I0321 09:26:58.702331 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:26:58 crc kubenswrapper[4932]: E0321 09:26:58.703122 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.422436 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k8zdt"] Mar 21 09:27:02 crc kubenswrapper[4932]: E0321 09:27:02.423440 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d274d9-c8e6-47e4-afc3-93c6023540da" containerName="oc" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.423458 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d274d9-c8e6-47e4-afc3-93c6023540da" containerName="oc" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.423733 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d274d9-c8e6-47e4-afc3-93c6023540da" containerName="oc" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.425637 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.449892 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8zdt"] Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.573401 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-utilities\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.573453 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-catalog-content\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.573559 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvzn\" (UniqueName: \"kubernetes.io/projected/247611bd-2b17-4da2-a4f4-51cd8c100f26-kube-api-access-9vvzn\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.676165 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvzn\" (UniqueName: \"kubernetes.io/projected/247611bd-2b17-4da2-a4f4-51cd8c100f26-kube-api-access-9vvzn\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.676333 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-utilities\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.676371 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-catalog-content\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.676836 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-utilities\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.676847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-catalog-content\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.697588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvzn\" (UniqueName: \"kubernetes.io/projected/247611bd-2b17-4da2-a4f4-51cd8c100f26-kube-api-access-9vvzn\") pod \"community-operators-k8zdt\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:02 crc kubenswrapper[4932]: I0321 09:27:02.746182 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:03 crc kubenswrapper[4932]: W0321 09:27:03.215649 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod247611bd_2b17_4da2_a4f4_51cd8c100f26.slice/crio-867dd3a0a8ed90decfc17e92e9172e64668fc846609c7ffa2cfd60af1d75735e WatchSource:0}: Error finding container 867dd3a0a8ed90decfc17e92e9172e64668fc846609c7ffa2cfd60af1d75735e: Status 404 returned error can't find the container with id 867dd3a0a8ed90decfc17e92e9172e64668fc846609c7ffa2cfd60af1d75735e Mar 21 09:27:03 crc kubenswrapper[4932]: I0321 09:27:03.216866 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8zdt"] Mar 21 09:27:04 crc kubenswrapper[4932]: I0321 09:27:04.127058 4932 generic.go:334] "Generic (PLEG): container finished" podID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerID="4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a" exitCode=0 Mar 21 09:27:04 crc kubenswrapper[4932]: I0321 09:27:04.127181 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerDied","Data":"4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a"} Mar 21 09:27:04 crc kubenswrapper[4932]: I0321 09:27:04.127425 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerStarted","Data":"867dd3a0a8ed90decfc17e92e9172e64668fc846609c7ffa2cfd60af1d75735e"} Mar 21 09:27:05 crc kubenswrapper[4932]: I0321 09:27:05.140187 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" 
event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerStarted","Data":"baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda"} Mar 21 09:27:05 crc kubenswrapper[4932]: I0321 09:27:05.702271 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:27:05 crc kubenswrapper[4932]: E0321 09:27:05.702508 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:27:06 crc kubenswrapper[4932]: I0321 09:27:06.152017 4932 generic.go:334] "Generic (PLEG): container finished" podID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerID="baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda" exitCode=0 Mar 21 09:27:06 crc kubenswrapper[4932]: I0321 09:27:06.152096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerDied","Data":"baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda"} Mar 21 09:27:06 crc kubenswrapper[4932]: I0321 09:27:06.702604 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:27:06 crc kubenswrapper[4932]: E0321 09:27:06.703119 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 
09:27:07 crc kubenswrapper[4932]: I0321 09:27:07.166280 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerStarted","Data":"8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72"} Mar 21 09:27:07 crc kubenswrapper[4932]: I0321 09:27:07.190195 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k8zdt" podStartSLOduration=2.791348889 podStartE2EDuration="5.190172875s" podCreationTimestamp="2026-03-21 09:27:02 +0000 UTC" firstStartedPulling="2026-03-21 09:27:04.131880248 +0000 UTC m=+1727.727078527" lastFinishedPulling="2026-03-21 09:27:06.530704254 +0000 UTC m=+1730.125902513" observedRunningTime="2026-03-21 09:27:07.181106858 +0000 UTC m=+1730.776305147" watchObservedRunningTime="2026-03-21 09:27:07.190172875 +0000 UTC m=+1730.785371144" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.586004 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dcrzb"] Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.588190 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.601821 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dcrzb"] Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.740043 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6pc\" (UniqueName: \"kubernetes.io/projected/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-kube-api-access-gs6pc\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.740609 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-utilities\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.740966 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-catalog-content\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.842851 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-catalog-content\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.842963 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gs6pc\" (UniqueName: \"kubernetes.io/projected/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-kube-api-access-gs6pc\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.843140 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-utilities\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.844047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-catalog-content\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.844071 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-utilities\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.868208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs6pc\" (UniqueName: \"kubernetes.io/projected/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-kube-api-access-gs6pc\") pod \"certified-operators-dcrzb\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:09 crc kubenswrapper[4932]: I0321 09:27:09.909099 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:10 crc kubenswrapper[4932]: W0321 09:27:10.389621 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb33b21d7_5f2e_48a6_8964_a1ec697d28c9.slice/crio-c1074d1d2002487e67accafc5510c6fa93c2235055f45ea098a396bef905a14b WatchSource:0}: Error finding container c1074d1d2002487e67accafc5510c6fa93c2235055f45ea098a396bef905a14b: Status 404 returned error can't find the container with id c1074d1d2002487e67accafc5510c6fa93c2235055f45ea098a396bef905a14b Mar 21 09:27:10 crc kubenswrapper[4932]: I0321 09:27:10.392176 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dcrzb"] Mar 21 09:27:11 crc kubenswrapper[4932]: I0321 09:27:11.218032 4932 generic.go:334] "Generic (PLEG): container finished" podID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerID="9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3" exitCode=0 Mar 21 09:27:11 crc kubenswrapper[4932]: I0321 09:27:11.218262 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerDied","Data":"9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3"} Mar 21 09:27:11 crc kubenswrapper[4932]: I0321 09:27:11.218339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerStarted","Data":"c1074d1d2002487e67accafc5510c6fa93c2235055f45ea098a396bef905a14b"} Mar 21 09:27:12 crc kubenswrapper[4932]: I0321 09:27:12.230652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" 
event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerStarted","Data":"2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387"} Mar 21 09:27:12 crc kubenswrapper[4932]: I0321 09:27:12.747451 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:12 crc kubenswrapper[4932]: I0321 09:27:12.747534 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:12 crc kubenswrapper[4932]: I0321 09:27:12.802599 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:13 crc kubenswrapper[4932]: I0321 09:27:13.245388 4932 generic.go:334] "Generic (PLEG): container finished" podID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerID="2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387" exitCode=0 Mar 21 09:27:13 crc kubenswrapper[4932]: I0321 09:27:13.245441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerDied","Data":"2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387"} Mar 21 09:27:13 crc kubenswrapper[4932]: I0321 09:27:13.303109 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:13 crc kubenswrapper[4932]: I0321 09:27:13.702780 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:27:13 crc kubenswrapper[4932]: E0321 09:27:13.703035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:27:14 crc kubenswrapper[4932]: I0321 09:27:14.260366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerStarted","Data":"e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b"} Mar 21 09:27:14 crc kubenswrapper[4932]: I0321 09:27:14.283830 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dcrzb" podStartSLOduration=2.7818529229999998 podStartE2EDuration="5.283807746s" podCreationTimestamp="2026-03-21 09:27:09 +0000 UTC" firstStartedPulling="2026-03-21 09:27:11.220432137 +0000 UTC m=+1734.815630406" lastFinishedPulling="2026-03-21 09:27:13.72238696 +0000 UTC m=+1737.317585229" observedRunningTime="2026-03-21 09:27:14.275634658 +0000 UTC m=+1737.870832917" watchObservedRunningTime="2026-03-21 09:27:14.283807746 +0000 UTC m=+1737.879006005" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.183148 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8zdt"] Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.268388 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k8zdt" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="registry-server" containerID="cri-o://8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72" gracePeriod=2 Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.725398 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.866271 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vvzn\" (UniqueName: \"kubernetes.io/projected/247611bd-2b17-4da2-a4f4-51cd8c100f26-kube-api-access-9vvzn\") pod \"247611bd-2b17-4da2-a4f4-51cd8c100f26\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.866832 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-utilities\") pod \"247611bd-2b17-4da2-a4f4-51cd8c100f26\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.866855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-catalog-content\") pod \"247611bd-2b17-4da2-a4f4-51cd8c100f26\" (UID: \"247611bd-2b17-4da2-a4f4-51cd8c100f26\") " Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.867756 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-utilities" (OuterVolumeSpecName: "utilities") pod "247611bd-2b17-4da2-a4f4-51cd8c100f26" (UID: "247611bd-2b17-4da2-a4f4-51cd8c100f26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.871768 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247611bd-2b17-4da2-a4f4-51cd8c100f26-kube-api-access-9vvzn" (OuterVolumeSpecName: "kube-api-access-9vvzn") pod "247611bd-2b17-4da2-a4f4-51cd8c100f26" (UID: "247611bd-2b17-4da2-a4f4-51cd8c100f26"). InnerVolumeSpecName "kube-api-access-9vvzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.924160 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "247611bd-2b17-4da2-a4f4-51cd8c100f26" (UID: "247611bd-2b17-4da2-a4f4-51cd8c100f26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.969062 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vvzn\" (UniqueName: \"kubernetes.io/projected/247611bd-2b17-4da2-a4f4-51cd8c100f26-kube-api-access-9vvzn\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.969313 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:15 crc kubenswrapper[4932]: I0321 09:27:15.969569 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247611bd-2b17-4da2-a4f4-51cd8c100f26-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.281381 4932 generic.go:334] "Generic (PLEG): container finished" podID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerID="8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72" exitCode=0 Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.281413 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerDied","Data":"8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72"} Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.281457 4932 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8zdt" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.281485 4932 scope.go:117] "RemoveContainer" containerID="8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.281470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8zdt" event={"ID":"247611bd-2b17-4da2-a4f4-51cd8c100f26","Type":"ContainerDied","Data":"867dd3a0a8ed90decfc17e92e9172e64668fc846609c7ffa2cfd60af1d75735e"} Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.304365 4932 scope.go:117] "RemoveContainer" containerID="baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.314371 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8zdt"] Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.323575 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k8zdt"] Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.354809 4932 scope.go:117] "RemoveContainer" containerID="4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.379846 4932 scope.go:117] "RemoveContainer" containerID="8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72" Mar 21 09:27:16 crc kubenswrapper[4932]: E0321 09:27:16.380283 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72\": container with ID starting with 8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72 not found: ID does not exist" containerID="8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.380330 
4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72"} err="failed to get container status \"8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72\": rpc error: code = NotFound desc = could not find container \"8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72\": container with ID starting with 8d0771775abff6dd0a79bd105f6c29593710dc95ac3ceaa2bf91ec80c83d7a72 not found: ID does not exist" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.380377 4932 scope.go:117] "RemoveContainer" containerID="baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda" Mar 21 09:27:16 crc kubenswrapper[4932]: E0321 09:27:16.380866 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda\": container with ID starting with baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda not found: ID does not exist" containerID="baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.380904 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda"} err="failed to get container status \"baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda\": rpc error: code = NotFound desc = could not find container \"baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda\": container with ID starting with baaa3a9eaff08feaba401501bc2f7943bfcb63a004d1c2e9e3e1261545bb0eda not found: ID does not exist" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.380931 4932 scope.go:117] "RemoveContainer" containerID="4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a" Mar 21 09:27:16 crc kubenswrapper[4932]: E0321 
09:27:16.381252 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a\": container with ID starting with 4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a not found: ID does not exist" containerID="4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a" Mar 21 09:27:16 crc kubenswrapper[4932]: I0321 09:27:16.381282 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a"} err="failed to get container status \"4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a\": rpc error: code = NotFound desc = could not find container \"4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a\": container with ID starting with 4fc6f70e7a7f2d34b7da4df8235054b4827565e0ed048814e9647702cdf6bd9a not found: ID does not exist" Mar 21 09:27:17 crc kubenswrapper[4932]: I0321 09:27:17.713292 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" path="/var/lib/kubelet/pods/247611bd-2b17-4da2-a4f4-51cd8c100f26/volumes" Mar 21 09:27:19 crc kubenswrapper[4932]: I0321 09:27:19.702572 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:27:19 crc kubenswrapper[4932]: I0321 09:27:19.703117 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:27:19 crc kubenswrapper[4932]: E0321 09:27:19.703160 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" 
podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:27:19 crc kubenswrapper[4932]: E0321 09:27:19.703480 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:27:19 crc kubenswrapper[4932]: I0321 09:27:19.909503 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:19 crc kubenswrapper[4932]: I0321 09:27:19.909575 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:19 crc kubenswrapper[4932]: I0321 09:27:19.959540 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:20 crc kubenswrapper[4932]: I0321 09:27:20.372589 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:20 crc kubenswrapper[4932]: I0321 09:27:20.426483 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dcrzb"] Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.348652 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dcrzb" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="registry-server" containerID="cri-o://e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b" gracePeriod=2 Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.793225 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.909120 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-catalog-content\") pod \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.909260 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs6pc\" (UniqueName: \"kubernetes.io/projected/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-kube-api-access-gs6pc\") pod \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.909939 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-utilities\") pod \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\" (UID: \"b33b21d7-5f2e-48a6-8964-a1ec697d28c9\") " Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.910910 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-utilities" (OuterVolumeSpecName: "utilities") pod "b33b21d7-5f2e-48a6-8964-a1ec697d28c9" (UID: "b33b21d7-5f2e-48a6-8964-a1ec697d28c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.914817 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-kube-api-access-gs6pc" (OuterVolumeSpecName: "kube-api-access-gs6pc") pod "b33b21d7-5f2e-48a6-8964-a1ec697d28c9" (UID: "b33b21d7-5f2e-48a6-8964-a1ec697d28c9"). InnerVolumeSpecName "kube-api-access-gs6pc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:27:22 crc kubenswrapper[4932]: I0321 09:27:22.955169 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b33b21d7-5f2e-48a6-8964-a1ec697d28c9" (UID: "b33b21d7-5f2e-48a6-8964-a1ec697d28c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.013079 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.013377 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs6pc\" (UniqueName: \"kubernetes.io/projected/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-kube-api-access-gs6pc\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.013455 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b33b21d7-5f2e-48a6-8964-a1ec697d28c9-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.360739 4932 generic.go:334] "Generic (PLEG): container finished" podID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerID="e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b" exitCode=0 Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.360818 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerDied","Data":"e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b"} Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.360827 4932 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dcrzb" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.360862 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcrzb" event={"ID":"b33b21d7-5f2e-48a6-8964-a1ec697d28c9","Type":"ContainerDied","Data":"c1074d1d2002487e67accafc5510c6fa93c2235055f45ea098a396bef905a14b"} Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.360885 4932 scope.go:117] "RemoveContainer" containerID="e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.394790 4932 scope.go:117] "RemoveContainer" containerID="2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.395333 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dcrzb"] Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.407364 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dcrzb"] Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.417464 4932 scope.go:117] "RemoveContainer" containerID="9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.476531 4932 scope.go:117] "RemoveContainer" containerID="e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b" Mar 21 09:27:23 crc kubenswrapper[4932]: E0321 09:27:23.477037 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b\": container with ID starting with e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b not found: ID does not exist" containerID="e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.477100 
4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b"} err="failed to get container status \"e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b\": rpc error: code = NotFound desc = could not find container \"e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b\": container with ID starting with e23076b497ab9934ea4c5cfb3e595fed3ef7d831f60e7b523eaa3d39c4b91a4b not found: ID does not exist" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.477141 4932 scope.go:117] "RemoveContainer" containerID="2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387" Mar 21 09:27:23 crc kubenswrapper[4932]: E0321 09:27:23.477474 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387\": container with ID starting with 2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387 not found: ID does not exist" containerID="2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.477498 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387"} err="failed to get container status \"2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387\": rpc error: code = NotFound desc = could not find container \"2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387\": container with ID starting with 2a75d9ef48a618f9f288063a49249f678d1e1347527d81251a2aaf12eb1b3387 not found: ID does not exist" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.477514 4932 scope.go:117] "RemoveContainer" containerID="9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3" Mar 21 09:27:23 crc kubenswrapper[4932]: E0321 
09:27:23.477828 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3\": container with ID starting with 9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3 not found: ID does not exist" containerID="9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.477867 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3"} err="failed to get container status \"9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3\": rpc error: code = NotFound desc = could not find container \"9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3\": container with ID starting with 9ec09675a8d1e6904bac194de4642d9234e38d8f0610d1dd52a6f5e695fb09e3 not found: ID does not exist" Mar 21 09:27:23 crc kubenswrapper[4932]: I0321 09:27:23.714210 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" path="/var/lib/kubelet/pods/b33b21d7-5f2e-48a6-8964-a1ec697d28c9/volumes" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.605057 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qt9n2"] Mar 21 09:27:25 crc kubenswrapper[4932]: E0321 09:27:25.607817 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="extract-utilities" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.607835 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="extract-utilities" Mar 21 09:27:25 crc kubenswrapper[4932]: E0321 09:27:25.607848 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="registry-server" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.607853 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="registry-server" Mar 21 09:27:25 crc kubenswrapper[4932]: E0321 09:27:25.607870 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="extract-utilities" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.607878 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="extract-utilities" Mar 21 09:27:25 crc kubenswrapper[4932]: E0321 09:27:25.607894 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="extract-content" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.607900 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="extract-content" Mar 21 09:27:25 crc kubenswrapper[4932]: E0321 09:27:25.607918 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="extract-content" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.607924 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="extract-content" Mar 21 09:27:25 crc kubenswrapper[4932]: E0321 09:27:25.607931 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="registry-server" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.607937 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="registry-server" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.608118 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="247611bd-2b17-4da2-a4f4-51cd8c100f26" containerName="registry-server" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.608142 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33b21d7-5f2e-48a6-8964-a1ec697d28c9" containerName="registry-server" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.609574 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.624327 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qt9n2"] Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.669178 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-utilities\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.669249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-catalog-content\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.669307 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxn9\" (UniqueName: \"kubernetes.io/projected/c458ffff-a3c3-47ba-a42c-533e29d29b19-kube-api-access-nwxn9\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.771016 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-utilities\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.771083 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-catalog-content\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.771152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxn9\" (UniqueName: \"kubernetes.io/projected/c458ffff-a3c3-47ba-a42c-533e29d29b19-kube-api-access-nwxn9\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.771620 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-utilities\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.771899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-catalog-content\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.795498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nwxn9\" (UniqueName: \"kubernetes.io/projected/c458ffff-a3c3-47ba-a42c-533e29d29b19-kube-api-access-nwxn9\") pod \"redhat-marketplace-qt9n2\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:25 crc kubenswrapper[4932]: I0321 09:27:25.938146 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:26 crc kubenswrapper[4932]: I0321 09:27:26.416165 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qt9n2"] Mar 21 09:27:27 crc kubenswrapper[4932]: I0321 09:27:27.403382 4932 generic.go:334] "Generic (PLEG): container finished" podID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerID="5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030" exitCode=0 Mar 21 09:27:27 crc kubenswrapper[4932]: I0321 09:27:27.403488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qt9n2" event={"ID":"c458ffff-a3c3-47ba-a42c-533e29d29b19","Type":"ContainerDied","Data":"5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030"} Mar 21 09:27:27 crc kubenswrapper[4932]: I0321 09:27:27.403693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qt9n2" event={"ID":"c458ffff-a3c3-47ba-a42c-533e29d29b19","Type":"ContainerStarted","Data":"56785f511d51101c44231b13f3e25ca2fdc0bf3d9baad946d8e56bc52d75865a"} Mar 21 09:27:28 crc kubenswrapper[4932]: I0321 09:27:28.416792 4932 generic.go:334] "Generic (PLEG): container finished" podID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerID="f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f" exitCode=0 Mar 21 09:27:28 crc kubenswrapper[4932]: I0321 09:27:28.416891 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qt9n2" 
event={"ID":"c458ffff-a3c3-47ba-a42c-533e29d29b19","Type":"ContainerDied","Data":"f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f"} Mar 21 09:27:28 crc kubenswrapper[4932]: I0321 09:27:28.702903 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:27:28 crc kubenswrapper[4932]: E0321 09:27:28.703209 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:27:29 crc kubenswrapper[4932]: I0321 09:27:29.430395 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qt9n2" event={"ID":"c458ffff-a3c3-47ba-a42c-533e29d29b19","Type":"ContainerStarted","Data":"9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df"} Mar 21 09:27:29 crc kubenswrapper[4932]: I0321 09:27:29.457101 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qt9n2" podStartSLOduration=3.054662325 podStartE2EDuration="4.457079492s" podCreationTimestamp="2026-03-21 09:27:25 +0000 UTC" firstStartedPulling="2026-03-21 09:27:27.407257264 +0000 UTC m=+1751.002455533" lastFinishedPulling="2026-03-21 09:27:28.809674431 +0000 UTC m=+1752.404872700" observedRunningTime="2026-03-21 09:27:29.446400865 +0000 UTC m=+1753.041599134" watchObservedRunningTime="2026-03-21 09:27:29.457079492 +0000 UTC m=+1753.052277761" Mar 21 09:27:30 crc kubenswrapper[4932]: I0321 09:27:30.703485 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:27:30 crc kubenswrapper[4932]: E0321 09:27:30.704015 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:27:34 crc kubenswrapper[4932]: I0321 09:27:34.703878 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:27:34 crc kubenswrapper[4932]: E0321 09:27:34.704756 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:27:35 crc kubenswrapper[4932]: I0321 09:27:35.939220 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:35 crc kubenswrapper[4932]: I0321 09:27:35.940966 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:35 crc kubenswrapper[4932]: I0321 09:27:35.991564 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:36 crc kubenswrapper[4932]: I0321 09:27:36.550506 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:36 crc kubenswrapper[4932]: I0321 09:27:36.603730 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qt9n2"] Mar 21 09:27:38 crc kubenswrapper[4932]: I0321 09:27:38.515307 4932 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-qt9n2" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="registry-server" containerID="cri-o://9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df" gracePeriod=2 Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.031026 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.195553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-catalog-content\") pod \"c458ffff-a3c3-47ba-a42c-533e29d29b19\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.195841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwxn9\" (UniqueName: \"kubernetes.io/projected/c458ffff-a3c3-47ba-a42c-533e29d29b19-kube-api-access-nwxn9\") pod \"c458ffff-a3c3-47ba-a42c-533e29d29b19\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.195945 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-utilities\") pod \"c458ffff-a3c3-47ba-a42c-533e29d29b19\" (UID: \"c458ffff-a3c3-47ba-a42c-533e29d29b19\") " Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.197264 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-utilities" (OuterVolumeSpecName: "utilities") pod "c458ffff-a3c3-47ba-a42c-533e29d29b19" (UID: "c458ffff-a3c3-47ba-a42c-533e29d29b19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.202505 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c458ffff-a3c3-47ba-a42c-533e29d29b19-kube-api-access-nwxn9" (OuterVolumeSpecName: "kube-api-access-nwxn9") pod "c458ffff-a3c3-47ba-a42c-533e29d29b19" (UID: "c458ffff-a3c3-47ba-a42c-533e29d29b19"). InnerVolumeSpecName "kube-api-access-nwxn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.231760 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c458ffff-a3c3-47ba-a42c-533e29d29b19" (UID: "c458ffff-a3c3-47ba-a42c-533e29d29b19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.298407 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwxn9\" (UniqueName: \"kubernetes.io/projected/c458ffff-a3c3-47ba-a42c-533e29d29b19-kube-api-access-nwxn9\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.298443 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.298453 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c458ffff-a3c3-47ba-a42c-533e29d29b19-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.525986 4932 generic.go:334] "Generic (PLEG): container finished" podID="c458ffff-a3c3-47ba-a42c-533e29d29b19" 
containerID="9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df" exitCode=0 Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.526069 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qt9n2" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.526610 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qt9n2" event={"ID":"c458ffff-a3c3-47ba-a42c-533e29d29b19","Type":"ContainerDied","Data":"9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df"} Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.526727 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qt9n2" event={"ID":"c458ffff-a3c3-47ba-a42c-533e29d29b19","Type":"ContainerDied","Data":"56785f511d51101c44231b13f3e25ca2fdc0bf3d9baad946d8e56bc52d75865a"} Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.526780 4932 scope.go:117] "RemoveContainer" containerID="9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.551643 4932 scope.go:117] "RemoveContainer" containerID="f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.557732 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qt9n2"] Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.576430 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qt9n2"] Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.586883 4932 scope.go:117] "RemoveContainer" containerID="5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.631157 4932 scope.go:117] "RemoveContainer" containerID="9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df" Mar 21 
09:27:39 crc kubenswrapper[4932]: E0321 09:27:39.631781 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df\": container with ID starting with 9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df not found: ID does not exist" containerID="9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.631835 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df"} err="failed to get container status \"9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df\": rpc error: code = NotFound desc = could not find container \"9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df\": container with ID starting with 9d7448c3743d7c38e92971449c7da06369a33c850f90063579d3832d4fca40df not found: ID does not exist" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.631866 4932 scope.go:117] "RemoveContainer" containerID="f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f" Mar 21 09:27:39 crc kubenswrapper[4932]: E0321 09:27:39.632380 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f\": container with ID starting with f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f not found: ID does not exist" containerID="f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.632428 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f"} err="failed to get container status 
\"f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f\": rpc error: code = NotFound desc = could not find container \"f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f\": container with ID starting with f21b1788fb9c8ae572fbab2d2bf36f5a02934dab0a7f5446010215984fe9bc6f not found: ID does not exist" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.632456 4932 scope.go:117] "RemoveContainer" containerID="5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030" Mar 21 09:27:39 crc kubenswrapper[4932]: E0321 09:27:39.632758 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030\": container with ID starting with 5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030 not found: ID does not exist" containerID="5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.632858 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030"} err="failed to get container status \"5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030\": rpc error: code = NotFound desc = could not find container \"5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030\": container with ID starting with 5820814a6d188ccce8ab5551ca9b169bf209cefe1cc9744720d62e2f725c9030 not found: ID does not exist" Mar 21 09:27:39 crc kubenswrapper[4932]: I0321 09:27:39.712816 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" path="/var/lib/kubelet/pods/c458ffff-a3c3-47ba-a42c-533e29d29b19/volumes" Mar 21 09:27:42 crc kubenswrapper[4932]: I0321 09:27:42.703044 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 
09:27:42 crc kubenswrapper[4932]: E0321 09:27:42.703882 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.045792 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-f2cbd"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.058591 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e324-account-create-update-dwb42"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.067951 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bqqzc"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.077916 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a3d8-account-create-update-kh2n5"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.088282 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-f2cbd"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.098445 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bqqzc"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.107221 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a3d8-account-create-update-kh2n5"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.115113 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e324-account-create-update-dwb42"] Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.703379 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:27:43 crc kubenswrapper[4932]: E0321 09:27:43.704028 
4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.716061 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17" path="/var/lib/kubelet/pods/6c827e78-d37f-4c9c-aa0b-7f7aa0b51a17/volumes" Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.716863 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98addc41-3e50-4aa5-88fd-b6b68dc6c4c9" path="/var/lib/kubelet/pods/98addc41-3e50-4aa5-88fd-b6b68dc6c4c9/volumes" Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.717563 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33ad9d2-3270-4462-bcd2-332f735c1dc7" path="/var/lib/kubelet/pods/b33ad9d2-3270-4462-bcd2-332f735c1dc7/volumes" Mar 21 09:27:43 crc kubenswrapper[4932]: I0321 09:27:43.718224 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a3d247-2ac1-4abe-9372-38b3a73d970c" path="/var/lib/kubelet/pods/b4a3d247-2ac1-4abe-9372-38b3a73d970c/volumes" Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.028593 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-mxvmb"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.040463 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9550-account-create-update-x8bqm"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.054972 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-cb17-account-create-update-98rqh"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 
09:27:46.065756 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5g82r"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.076240 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-mxvmb"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.086148 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5g82r"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.096381 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9550-account-create-update-x8bqm"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.106802 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-cb17-account-create-update-98rqh"] Mar 21 09:27:46 crc kubenswrapper[4932]: I0321 09:27:46.703243 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:27:46 crc kubenswrapper[4932]: E0321 09:27:46.703515 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:27:47 crc kubenswrapper[4932]: I0321 09:27:47.715340 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041ac472-88ec-4f81-8e91-a0c80ed96a97" path="/var/lib/kubelet/pods/041ac472-88ec-4f81-8e91-a0c80ed96a97/volumes" Mar 21 09:27:47 crc kubenswrapper[4932]: I0321 09:27:47.716801 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c69b6abf-7357-49bd-8f5c-0fe8b1382238" path="/var/lib/kubelet/pods/c69b6abf-7357-49bd-8f5c-0fe8b1382238/volumes" Mar 21 09:27:47 crc kubenswrapper[4932]: I0321 09:27:47.717478 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce0c083c-1e70-4300-b533-4adbba1989e2" path="/var/lib/kubelet/pods/ce0c083c-1e70-4300-b533-4adbba1989e2/volumes"
Mar 21 09:27:47 crc kubenswrapper[4932]: I0321 09:27:47.718063 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7962d98-3f35-4702-8035-a15b0b2223c8" path="/var/lib/kubelet/pods/f7962d98-3f35-4702-8035-a15b0b2223c8/volumes"
Mar 21 09:27:49 crc kubenswrapper[4932]: I0321 09:27:49.032157 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b5vrt"]
Mar 21 09:27:49 crc kubenswrapper[4932]: I0321 09:27:49.044118 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b5vrt"]
Mar 21 09:27:49 crc kubenswrapper[4932]: I0321 09:27:49.716770 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0e44dc-9908-4a4f-bc21-18d232f11ec6" path="/var/lib/kubelet/pods/1f0e44dc-9908-4a4f-bc21-18d232f11ec6/volumes"
Mar 21 09:27:54 crc kubenswrapper[4932]: I0321 09:27:54.702015 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:27:54 crc kubenswrapper[4932]: E0321 09:27:54.702769 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:27:56 crc kubenswrapper[4932]: I0321 09:27:56.702022 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:27:56 crc kubenswrapper[4932]: E0321 09:27:56.702570 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.150157 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568088-qvwwb"]
Mar 21 09:28:00 crc kubenswrapper[4932]: E0321 09:28:00.150957 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="extract-content"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.150971 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="extract-content"
Mar 21 09:28:00 crc kubenswrapper[4932]: E0321 09:28:00.151012 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="registry-server"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.151018 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="registry-server"
Mar 21 09:28:00 crc kubenswrapper[4932]: E0321 09:28:00.151052 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="extract-utilities"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.151059 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="extract-utilities"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.151251 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c458ffff-a3c3-47ba-a42c-533e29d29b19" containerName="registry-server"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.152025 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.154972 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.156020 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.156483 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.162588 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568088-qvwwb"]
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.241985 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwg4r\" (UniqueName: \"kubernetes.io/projected/9efbd146-0cf9-494d-b174-41faec84bfd7-kube-api-access-mwg4r\") pod \"auto-csr-approver-29568088-qvwwb\" (UID: \"9efbd146-0cf9-494d-b174-41faec84bfd7\") " pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.344774 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwg4r\" (UniqueName: \"kubernetes.io/projected/9efbd146-0cf9-494d-b174-41faec84bfd7-kube-api-access-mwg4r\") pod \"auto-csr-approver-29568088-qvwwb\" (UID: \"9efbd146-0cf9-494d-b174-41faec84bfd7\") " pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.368011 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwg4r\" (UniqueName: \"kubernetes.io/projected/9efbd146-0cf9-494d-b174-41faec84bfd7-kube-api-access-mwg4r\") pod \"auto-csr-approver-29568088-qvwwb\" (UID: \"9efbd146-0cf9-494d-b174-41faec84bfd7\") " pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.473233 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.702472 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:28:00 crc kubenswrapper[4932]: E0321 09:28:00.702715 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:28:00 crc kubenswrapper[4932]: I0321 09:28:00.893855 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568088-qvwwb"]
Mar 21 09:28:01 crc kubenswrapper[4932]: I0321 09:28:01.750936 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568088-qvwwb" event={"ID":"9efbd146-0cf9-494d-b174-41faec84bfd7","Type":"ContainerStarted","Data":"2b0485c89885ca7a6bfbe291e54492ec78909020a8bd04e6f3f4f7401a65b4c7"}
Mar 21 09:28:02 crc kubenswrapper[4932]: I0321 09:28:02.755476 4932 generic.go:334] "Generic (PLEG): container finished" podID="9efbd146-0cf9-494d-b174-41faec84bfd7" containerID="eac1cefa6ba5609645d2367655457824f0c8b2ed39e7c64a43fa03eaea86bd70" exitCode=0
Mar 21 09:28:02 crc kubenswrapper[4932]: I0321 09:28:02.755534 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568088-qvwwb" event={"ID":"9efbd146-0cf9-494d-b174-41faec84bfd7","Type":"ContainerDied","Data":"eac1cefa6ba5609645d2367655457824f0c8b2ed39e7c64a43fa03eaea86bd70"}
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.103070 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.225014 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwg4r\" (UniqueName: \"kubernetes.io/projected/9efbd146-0cf9-494d-b174-41faec84bfd7-kube-api-access-mwg4r\") pod \"9efbd146-0cf9-494d-b174-41faec84bfd7\" (UID: \"9efbd146-0cf9-494d-b174-41faec84bfd7\") "
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.231326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efbd146-0cf9-494d-b174-41faec84bfd7-kube-api-access-mwg4r" (OuterVolumeSpecName: "kube-api-access-mwg4r") pod "9efbd146-0cf9-494d-b174-41faec84bfd7" (UID: "9efbd146-0cf9-494d-b174-41faec84bfd7"). InnerVolumeSpecName "kube-api-access-mwg4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.328407 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwg4r\" (UniqueName: \"kubernetes.io/projected/9efbd146-0cf9-494d-b174-41faec84bfd7-kube-api-access-mwg4r\") on node \"crc\" DevicePath \"\""
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.775695 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568088-qvwwb" event={"ID":"9efbd146-0cf9-494d-b174-41faec84bfd7","Type":"ContainerDied","Data":"2b0485c89885ca7a6bfbe291e54492ec78909020a8bd04e6f3f4f7401a65b4c7"}
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.775745 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0485c89885ca7a6bfbe291e54492ec78909020a8bd04e6f3f4f7401a65b4c7"
Mar 21 09:28:04 crc kubenswrapper[4932]: I0321 09:28:04.775756 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568088-qvwwb"
Mar 21 09:28:05 crc kubenswrapper[4932]: I0321 09:28:05.185273 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568082-h84rh"]
Mar 21 09:28:05 crc kubenswrapper[4932]: I0321 09:28:05.196753 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568082-h84rh"]
Mar 21 09:28:05 crc kubenswrapper[4932]: I0321 09:28:05.716951 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a82b375-1f38-45f9-baed-1410727d4b6f" path="/var/lib/kubelet/pods/8a82b375-1f38-45f9-baed-1410727d4b6f/volumes"
Mar 21 09:28:07 crc kubenswrapper[4932]: I0321 09:28:07.710690 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:28:07 crc kubenswrapper[4932]: E0321 09:28:07.711213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:28:08 crc kubenswrapper[4932]: I0321 09:28:08.703161 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:28:08 crc kubenswrapper[4932]: E0321 09:28:08.703740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:28:11 crc kubenswrapper[4932]: I0321 09:28:11.703156 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:28:11 crc kubenswrapper[4932]: E0321 09:28:11.703721 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:28:17 crc kubenswrapper[4932]: I0321 09:28:17.035165 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-x9r28"]
Mar 21 09:28:17 crc kubenswrapper[4932]: I0321 09:28:17.043921 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-x9r28"]
Mar 21 09:28:17 crc kubenswrapper[4932]: I0321 09:28:17.717074 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff1f3bd-6d64-4d74-888b-56619d289f45" path="/var/lib/kubelet/pods/eff1f3bd-6d64-4d74-888b-56619d289f45/volumes"
Mar 21 09:28:20 crc kubenswrapper[4932]: I0321 09:28:20.703272 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:28:20 crc kubenswrapper[4932]: E0321 09:28:20.703873 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:28:21 crc kubenswrapper[4932]: I0321 09:28:21.702632 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:28:21 crc kubenswrapper[4932]: E0321 09:28:21.702959 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:28:25 crc kubenswrapper[4932]: I0321 09:28:25.703391 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:28:25 crc kubenswrapper[4932]: E0321 09:28:25.704279 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.032949 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rc447"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.059400 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b419-account-create-update-jjq88"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.072143 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qp2p8"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.081668 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b419-account-create-update-jjq88"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.091342 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b425-account-create-update-n4dzt"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.104578 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rc447"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.117668 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b425-account-create-update-n4dzt"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.131407 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qp2p8"]
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.375770 4932 scope.go:117] "RemoveContainer" containerID="10c3b20f0ef0400874eb1a7f4971e523a3d1fef15909975d51b9a51247d2ab4a"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.400232 4932 scope.go:117] "RemoveContainer" containerID="0817c6307ba5c2e366515dd0cf717b87035466bc4ab3004424efd207440f58c8"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.453013 4932 scope.go:117] "RemoveContainer" containerID="cea2e5a77eeea003668f8b6d7459b0c59245690c77eb65a614b2200a754abec7"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.507801 4932 scope.go:117] "RemoveContainer" containerID="3e9d12128509faa2a10d7f15e063a5addc1ec79cbb107c2a6e56d447e92cba2c"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.553226 4932 scope.go:117] "RemoveContainer" containerID="6dfe4246b75985df7887f95b45a10964c4ad72834fb7b91a798d97e884f4b070"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.615187 4932 scope.go:117] "RemoveContainer" containerID="3e93259fb62ca4d1bb7ae4c89fc579207149edd5e901ec88f265d0f24d487ae2"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.646186 4932 scope.go:117] "RemoveContainer" containerID="355e91a60a2e5cc87067699084326e157e2fbf4472e0988044c96a16f7ca903c"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.666258 4932 scope.go:117] "RemoveContainer" containerID="c0274b3ff01c1b8a82e4031769a3a5328a4577c2a01c7d652b282387a35fce6b"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.686254 4932 scope.go:117] "RemoveContainer" containerID="e976f03f46fbe39c7266143a7b19dd2ba5d443d838f47bf9fb82a3869f3fd133"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.716550 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4359dfaa-1096-47af-a540-db559c28d15e" path="/var/lib/kubelet/pods/4359dfaa-1096-47af-a540-db559c28d15e/volumes"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.717428 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd399df-1028-4cf8-bf73-307464772e8a" path="/var/lib/kubelet/pods/6cd399df-1028-4cf8-bf73-307464772e8a/volumes"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.717969 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968" path="/var/lib/kubelet/pods/8c9cd2f2-7dd3-4a5f-b2a5-883eb2643968/volumes"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.718603 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa143683-f786-4613-aed7-95a17c40f484" path="/var/lib/kubelet/pods/aa143683-f786-4613-aed7-95a17c40f484/volumes"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.722448 4932 scope.go:117] "RemoveContainer" containerID="e35437e3e0f00f1a0ca0aabc7bd5f9c57a1b253b365db5853947bd556a10cf95"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.742283 4932 scope.go:117] "RemoveContainer" containerID="078fd7e3900ba84c33dc3c709dd94d1060c6a80cd6ae21cd8f790945b13c549f"
Mar 21 09:28:29 crc kubenswrapper[4932]: I0321 09:28:29.767267 4932 scope.go:117] "RemoveContainer" containerID="41d9f293f6a7ed39b32f745ab75983e17e7694ed430e13fac7cdc75cb96d1a56"
Mar 21 09:28:34 crc kubenswrapper[4932]: I0321 09:28:34.703788 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:28:34 crc kubenswrapper[4932]: E0321 09:28:34.704706 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:28:36 crc kubenswrapper[4932]: I0321 09:28:36.703192 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:28:36 crc kubenswrapper[4932]: E0321 09:28:36.704118 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:28:38 crc kubenswrapper[4932]: I0321 09:28:38.045791 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0698-account-create-update-7r5n2"]
Mar 21 09:28:38 crc kubenswrapper[4932]: I0321 09:28:38.055231 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qhcqk"]
Mar 21 09:28:38 crc kubenswrapper[4932]: I0321 09:28:38.063659 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rkf9k"]
Mar 21 09:28:38 crc kubenswrapper[4932]: I0321 09:28:38.072574 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rkf9k"]
Mar 21 09:28:38 crc kubenswrapper[4932]: I0321 09:28:38.079823 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qhcqk"]
Mar 21 09:28:38 crc kubenswrapper[4932]: I0321 09:28:38.086910 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0698-account-create-update-7r5n2"]
Mar 21 09:28:39 crc kubenswrapper[4932]: I0321 09:28:39.702260 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:28:39 crc kubenswrapper[4932]: E0321 09:28:39.702615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:28:39 crc kubenswrapper[4932]: I0321 09:28:39.714385 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b62a93b2-b391-4a7f-b430-dd09d30cc6b0" path="/var/lib/kubelet/pods/b62a93b2-b391-4a7f-b430-dd09d30cc6b0/volumes"
Mar 21 09:28:39 crc kubenswrapper[4932]: I0321 09:28:39.715313 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b733e0fd-d745-40bf-be43-1b3fdfa9d1ae" path="/var/lib/kubelet/pods/b733e0fd-d745-40bf-be43-1b3fdfa9d1ae/volumes"
Mar 21 09:28:39 crc kubenswrapper[4932]: I0321 09:28:39.715931 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c619c392-3f57-4cf2-9f7b-880c8f672365" path="/var/lib/kubelet/pods/c619c392-3f57-4cf2-9f7b-880c8f672365/volumes"
Mar 21 09:28:47 crc kubenswrapper[4932]: I0321 09:28:47.708858 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:28:47 crc kubenswrapper[4932]: I0321 09:28:47.709548 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:28:47 crc kubenswrapper[4932]: E0321 09:28:47.709783 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:28:47 crc kubenswrapper[4932]: E0321 09:28:47.709824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:28:50 crc kubenswrapper[4932]: I0321 09:28:50.036671 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-k7vwh"]
Mar 21 09:28:50 crc kubenswrapper[4932]: I0321 09:28:50.045533 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-k7vwh"]
Mar 21 09:28:51 crc kubenswrapper[4932]: I0321 09:28:51.720500 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd481d0-7b76-4ce4-9b88-4f8d37125f7e" path="/var/lib/kubelet/pods/1fd481d0-7b76-4ce4-9b88-4f8d37125f7e/volumes"
Mar 21 09:28:52 crc kubenswrapper[4932]: I0321 09:28:52.703373 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:28:52 crc kubenswrapper[4932]: E0321 09:28:52.704093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:28:58 crc kubenswrapper[4932]: I0321 09:28:58.704079 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:28:58 crc kubenswrapper[4932]: E0321 09:28:58.705289 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:29:02 crc kubenswrapper[4932]: I0321 09:29:02.702454 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:29:02 crc kubenswrapper[4932]: E0321 09:29:02.703255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:29:04 crc kubenswrapper[4932]: I0321 09:29:04.703267 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:29:04 crc kubenswrapper[4932]: E0321 09:29:04.703605 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:29:13 crc kubenswrapper[4932]: I0321 09:29:13.050252 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nbzvp"]
Mar 21 09:29:13 crc kubenswrapper[4932]: I0321 09:29:13.059601 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nbzvp"]
Mar 21 09:29:13 crc kubenswrapper[4932]: I0321 09:29:13.703467 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:29:13 crc kubenswrapper[4932]: E0321 09:29:13.703714 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:29:13 crc kubenswrapper[4932]: I0321 09:29:13.720197 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faee8175-5928-4824-be91-e0da3c01b71a" path="/var/lib/kubelet/pods/faee8175-5928-4824-be91-e0da3c01b71a/volumes"
Mar 21 09:29:16 crc kubenswrapper[4932]: I0321 09:29:16.778983 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:29:16 crc kubenswrapper[4932]: E0321 09:29:16.779957 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:29:18 crc kubenswrapper[4932]: I0321 09:29:18.702849 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:29:18 crc kubenswrapper[4932]: E0321 09:29:18.703469 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:29:20 crc kubenswrapper[4932]: I0321 09:29:20.032469 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cmdkb"]
Mar 21 09:29:20 crc kubenswrapper[4932]: I0321 09:29:20.044698 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cmdkb"]
Mar 21 09:29:21 crc kubenswrapper[4932]: I0321 09:29:21.713917 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba11bdb-a9d2-414d-b2df-3eaedd97df7e" path="/var/lib/kubelet/pods/bba11bdb-a9d2-414d-b2df-3eaedd97df7e/volumes"
Mar 21 09:29:24 crc kubenswrapper[4932]: I0321 09:29:24.702548 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:29:24 crc kubenswrapper[4932]: E0321 09:29:24.703076 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:29:28 crc kubenswrapper[4932]: I0321 09:29:28.030223 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-h2dqh"]
Mar 21 09:29:28 crc kubenswrapper[4932]: I0321 09:29:28.039103 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kcpjh"]
Mar 21 09:29:28 crc kubenswrapper[4932]: I0321 09:29:28.047856 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-h2dqh"]
Mar 21 09:29:28 crc kubenswrapper[4932]: I0321 09:29:28.057188 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kcpjh"]
Mar 21 09:29:29 crc kubenswrapper[4932]: I0321 09:29:29.715379 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a" path="/var/lib/kubelet/pods/512ddfd8-6f62-4c46-b3c1-7b0d478e7a5a/volumes"
Mar 21 09:29:29 crc kubenswrapper[4932]: I0321 09:29:29.718157 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b" path="/var/lib/kubelet/pods/d4751c71-36b6-4a0c-a5b6-3bdbce6ce42b/volumes"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.001588 4932 scope.go:117] "RemoveContainer" containerID="f40de84196136633e15322532b0a480556d28828673d9290289b0c5abf2989c4"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.027114 4932 scope.go:117] "RemoveContainer" containerID="60a38082968445e049ea86f13a7416ccb1a55b1b20b546ceb19b713d45309991"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.098881 4932 scope.go:117] "RemoveContainer" containerID="650ed92cd749fa114c156d9e7f75efe86217762afc53b9a2718a553a39d2ab9b"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.127788 4932 scope.go:117] "RemoveContainer" containerID="d8dc86c56ad1ecb876a969a396e3fe533c8e69995907fffdadb52b455105f840"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.175469 4932 scope.go:117] "RemoveContainer" containerID="d264d5116d08eca72edf1910e4cc3259535a7f665f84c76ecc73413de419d3e9"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.221537 4932 scope.go:117] "RemoveContainer" containerID="e38075dd4e1927be6cf2db4967c264b8d09c9fb85b43dd8a231e6ae1bdbc15e0"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.291515 4932 scope.go:117] "RemoveContainer" containerID="3c4d336a5bbe2d06203cd10f04946393130b4c9e1ba60f418860e5921a36735b"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.328683 4932 scope.go:117] "RemoveContainer" containerID="8254a2d4464dd1924e3f06796cf51cdc9d33af308e51c06413170350cb08e4b8"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.380493 4932 scope.go:117] "RemoveContainer" containerID="5ed4c0bd6c557eaab65da1a605508b866d54185082a69d6081fd69916ca9d07b"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.414212 4932 scope.go:117] "RemoveContainer" containerID="5e60d67350769822ee75069a6f072b13e1b777adb01689bbe512cf8c0982a267"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.438877 4932 scope.go:117] "RemoveContainer" containerID="3abdbd41efd3138bd887ca71fb22a7f28afc3f1b53fd069611607089e0277fc0"
Mar 21 09:29:30 crc kubenswrapper[4932]: I0321 09:29:30.702747 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:29:30 crc kubenswrapper[4932]: E0321 09:29:30.703231 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:29:31 crc kubenswrapper[4932]: I0321 09:29:31.702290 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:29:31 crc kubenswrapper[4932]: E0321 09:29:31.702724 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:29:39 crc kubenswrapper[4932]: I0321 09:29:39.703208 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:29:39 crc kubenswrapper[4932]: E0321 09:29:39.704313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:29:41 crc kubenswrapper[4932]: I0321 09:29:41.062907 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vz4hd"]
Mar 21 09:29:41 crc kubenswrapper[4932]: I0321 09:29:41.075167 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vz4hd"]
Mar 21 09:29:41 crc kubenswrapper[4932]: I0321 09:29:41.717723 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37076824-e8b6-4b75-aea4-f463d7e50613" path="/var/lib/kubelet/pods/37076824-e8b6-4b75-aea4-f463d7e50613/volumes"
Mar 21 09:29:43 crc kubenswrapper[4932]: I0321 09:29:43.702751 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:29:43 crc kubenswrapper[4932]: E0321 09:29:43.704316 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:29:45 crc kubenswrapper[4932]: I0321 09:29:45.702367 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:29:45 crc kubenswrapper[4932]: E0321 09:29:45.702934 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:29:53 crc kubenswrapper[4932]: I0321 09:29:53.703270 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:29:53 crc kubenswrapper[4932]: E0321 09:29:53.704155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:29:55 crc kubenswrapper[4932]: I0321 09:29:55.702712 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:29:55 crc kubenswrapper[4932]: E0321 09:29:55.703440 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:29:58 crc kubenswrapper[4932]: I0321 09:29:58.702996 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:29:58 crc kubenswrapper[4932]: E0321 09:29:58.703860 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.142600 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"]
Mar 21 09:30:00 crc kubenswrapper[4932]: E0321 09:30:00.143036 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efbd146-0cf9-494d-b174-41faec84bfd7" containerName="oc"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.143052 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efbd146-0cf9-494d-b174-41faec84bfd7" containerName="oc"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.143338 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efbd146-0cf9-494d-b174-41faec84bfd7" containerName="oc"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.144167 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.146605 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.147461 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.153820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"]
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.242999 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568090-mmjpz"]
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.243833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fj5\" (UniqueName: \"kubernetes.io/projected/3c09d9da-3112-44ac-a0fb-bba652fc0b97-kube-api-access-w5fj5\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"
Mar 21 09:30:00 crc kubenswrapper[4932]: I0321
09:30:00.244009 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c09d9da-3112-44ac-a0fb-bba652fc0b97-secret-volume\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.244059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c09d9da-3112-44ac-a0fb-bba652fc0b97-config-volume\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.245603 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.250462 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.250718 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.250897 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.274713 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568090-mmjpz"] Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.354288 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c09d9da-3112-44ac-a0fb-bba652fc0b97-secret-volume\") pod 
\"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.354385 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c09d9da-3112-44ac-a0fb-bba652fc0b97-config-volume\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.354454 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zmmb\" (UniqueName: \"kubernetes.io/projected/b03852e5-3e01-4aa8-a6bb-06533edf404e-kube-api-access-7zmmb\") pod \"auto-csr-approver-29568090-mmjpz\" (UID: \"b03852e5-3e01-4aa8-a6bb-06533edf404e\") " pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.354545 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5fj5\" (UniqueName: \"kubernetes.io/projected/3c09d9da-3112-44ac-a0fb-bba652fc0b97-kube-api-access-w5fj5\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.357482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c09d9da-3112-44ac-a0fb-bba652fc0b97-config-volume\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.391079 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c09d9da-3112-44ac-a0fb-bba652fc0b97-secret-volume\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.393876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5fj5\" (UniqueName: \"kubernetes.io/projected/3c09d9da-3112-44ac-a0fb-bba652fc0b97-kube-api-access-w5fj5\") pod \"collect-profiles-29568090-rhn46\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.466689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zmmb\" (UniqueName: \"kubernetes.io/projected/b03852e5-3e01-4aa8-a6bb-06533edf404e-kube-api-access-7zmmb\") pod \"auto-csr-approver-29568090-mmjpz\" (UID: \"b03852e5-3e01-4aa8-a6bb-06533edf404e\") " pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.471828 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.505437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zmmb\" (UniqueName: \"kubernetes.io/projected/b03852e5-3e01-4aa8-a6bb-06533edf404e-kube-api-access-7zmmb\") pod \"auto-csr-approver-29568090-mmjpz\" (UID: \"b03852e5-3e01-4aa8-a6bb-06533edf404e\") " pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.578287 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:00 crc kubenswrapper[4932]: I0321 09:30:00.945906 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"] Mar 21 09:30:01 crc kubenswrapper[4932]: I0321 09:30:01.048172 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568090-mmjpz"] Mar 21 09:30:01 crc kubenswrapper[4932]: W0321 09:30:01.051017 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb03852e5_3e01_4aa8_a6bb_06533edf404e.slice/crio-61ad0e262602b4c322da5b87b319ddeb40f10cfd628915844aaf545c6a094c62 WatchSource:0}: Error finding container 61ad0e262602b4c322da5b87b319ddeb40f10cfd628915844aaf545c6a094c62: Status 404 returned error can't find the container with id 61ad0e262602b4c322da5b87b319ddeb40f10cfd628915844aaf545c6a094c62 Mar 21 09:30:01 crc kubenswrapper[4932]: I0321 09:30:01.054062 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:30:01 crc kubenswrapper[4932]: I0321 09:30:01.185063 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" event={"ID":"b03852e5-3e01-4aa8-a6bb-06533edf404e","Type":"ContainerStarted","Data":"61ad0e262602b4c322da5b87b319ddeb40f10cfd628915844aaf545c6a094c62"} Mar 21 09:30:01 crc kubenswrapper[4932]: I0321 09:30:01.187281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" event={"ID":"3c09d9da-3112-44ac-a0fb-bba652fc0b97","Type":"ContainerStarted","Data":"f21dd9afa340f895ee4717f693e953103e36479394e8f155a4dc65c3c4f65ba3"} Mar 21 09:30:01 crc kubenswrapper[4932]: I0321 09:30:01.187327 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" event={"ID":"3c09d9da-3112-44ac-a0fb-bba652fc0b97","Type":"ContainerStarted","Data":"adfc3bf5c82f769b20c24659bc59792e01f57dbc00d11870758aa334f3d7f619"} Mar 21 09:30:01 crc kubenswrapper[4932]: I0321 09:30:01.207455 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" podStartSLOduration=1.207429138 podStartE2EDuration="1.207429138s" podCreationTimestamp="2026-03-21 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 09:30:01.202026718 +0000 UTC m=+1904.797224987" watchObservedRunningTime="2026-03-21 09:30:01.207429138 +0000 UTC m=+1904.802627417" Mar 21 09:30:02 crc kubenswrapper[4932]: I0321 09:30:02.198818 4932 generic.go:334] "Generic (PLEG): container finished" podID="3c09d9da-3112-44ac-a0fb-bba652fc0b97" containerID="f21dd9afa340f895ee4717f693e953103e36479394e8f155a4dc65c3c4f65ba3" exitCode=0 Mar 21 09:30:02 crc kubenswrapper[4932]: I0321 09:30:02.198865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" event={"ID":"3c09d9da-3112-44ac-a0fb-bba652fc0b97","Type":"ContainerDied","Data":"f21dd9afa340f895ee4717f693e953103e36479394e8f155a4dc65c3c4f65ba3"} Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.210723 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" event={"ID":"b03852e5-3e01-4aa8-a6bb-06533edf404e","Type":"ContainerStarted","Data":"f1e5fd380ab81023a54c47945d0cf9fd46725b73e099fb762e1b618d634b63b4"} Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.231749 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" podStartSLOduration=1.436535812 
podStartE2EDuration="3.23172226s" podCreationTimestamp="2026-03-21 09:30:00 +0000 UTC" firstStartedPulling="2026-03-21 09:30:01.053753897 +0000 UTC m=+1904.648952166" lastFinishedPulling="2026-03-21 09:30:02.848940345 +0000 UTC m=+1906.444138614" observedRunningTime="2026-03-21 09:30:03.223226482 +0000 UTC m=+1906.818424771" watchObservedRunningTime="2026-03-21 09:30:03.23172226 +0000 UTC m=+1906.826920539" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.537265 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.733819 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5fj5\" (UniqueName: \"kubernetes.io/projected/3c09d9da-3112-44ac-a0fb-bba652fc0b97-kube-api-access-w5fj5\") pod \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.734006 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c09d9da-3112-44ac-a0fb-bba652fc0b97-secret-volume\") pod \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.734198 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c09d9da-3112-44ac-a0fb-bba652fc0b97-config-volume\") pod \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\" (UID: \"3c09d9da-3112-44ac-a0fb-bba652fc0b97\") " Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.734915 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c09d9da-3112-44ac-a0fb-bba652fc0b97-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c09d9da-3112-44ac-a0fb-bba652fc0b97" 
(UID: "3c09d9da-3112-44ac-a0fb-bba652fc0b97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.739887 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c09d9da-3112-44ac-a0fb-bba652fc0b97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c09d9da-3112-44ac-a0fb-bba652fc0b97" (UID: "3c09d9da-3112-44ac-a0fb-bba652fc0b97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.741841 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c09d9da-3112-44ac-a0fb-bba652fc0b97-kube-api-access-w5fj5" (OuterVolumeSpecName: "kube-api-access-w5fj5") pod "3c09d9da-3112-44ac-a0fb-bba652fc0b97" (UID: "3c09d9da-3112-44ac-a0fb-bba652fc0b97"). InnerVolumeSpecName "kube-api-access-w5fj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.837166 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5fj5\" (UniqueName: \"kubernetes.io/projected/3c09d9da-3112-44ac-a0fb-bba652fc0b97-kube-api-access-w5fj5\") on node \"crc\" DevicePath \"\"" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.837201 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c09d9da-3112-44ac-a0fb-bba652fc0b97-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:30:03 crc kubenswrapper[4932]: I0321 09:30:03.837212 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c09d9da-3112-44ac-a0fb-bba652fc0b97-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:30:04 crc kubenswrapper[4932]: I0321 09:30:04.221510 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" event={"ID":"3c09d9da-3112-44ac-a0fb-bba652fc0b97","Type":"ContainerDied","Data":"adfc3bf5c82f769b20c24659bc59792e01f57dbc00d11870758aa334f3d7f619"} Mar 21 09:30:04 crc kubenswrapper[4932]: I0321 09:30:04.221553 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adfc3bf5c82f769b20c24659bc59792e01f57dbc00d11870758aa334f3d7f619" Mar 21 09:30:04 crc kubenswrapper[4932]: I0321 09:30:04.221532 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46" Mar 21 09:30:04 crc kubenswrapper[4932]: I0321 09:30:04.224242 4932 generic.go:334] "Generic (PLEG): container finished" podID="b03852e5-3e01-4aa8-a6bb-06533edf404e" containerID="f1e5fd380ab81023a54c47945d0cf9fd46725b73e099fb762e1b618d634b63b4" exitCode=0 Mar 21 09:30:04 crc kubenswrapper[4932]: I0321 09:30:04.224291 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" event={"ID":"b03852e5-3e01-4aa8-a6bb-06533edf404e","Type":"ContainerDied","Data":"f1e5fd380ab81023a54c47945d0cf9fd46725b73e099fb762e1b618d634b63b4"} Mar 21 09:30:04 crc kubenswrapper[4932]: I0321 09:30:04.702789 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:30:04 crc kubenswrapper[4932]: E0321 09:30:04.703066 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:30:05 crc kubenswrapper[4932]: I0321 09:30:05.575243 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:05 crc kubenswrapper[4932]: I0321 09:30:05.672126 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zmmb\" (UniqueName: \"kubernetes.io/projected/b03852e5-3e01-4aa8-a6bb-06533edf404e-kube-api-access-7zmmb\") pod \"b03852e5-3e01-4aa8-a6bb-06533edf404e\" (UID: \"b03852e5-3e01-4aa8-a6bb-06533edf404e\") " Mar 21 09:30:05 crc kubenswrapper[4932]: I0321 09:30:05.677760 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03852e5-3e01-4aa8-a6bb-06533edf404e-kube-api-access-7zmmb" (OuterVolumeSpecName: "kube-api-access-7zmmb") pod "b03852e5-3e01-4aa8-a6bb-06533edf404e" (UID: "b03852e5-3e01-4aa8-a6bb-06533edf404e"). InnerVolumeSpecName "kube-api-access-7zmmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:30:05 crc kubenswrapper[4932]: I0321 09:30:05.774561 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zmmb\" (UniqueName: \"kubernetes.io/projected/b03852e5-3e01-4aa8-a6bb-06533edf404e-kube-api-access-7zmmb\") on node \"crc\" DevicePath \"\"" Mar 21 09:30:06 crc kubenswrapper[4932]: I0321 09:30:06.250253 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" event={"ID":"b03852e5-3e01-4aa8-a6bb-06533edf404e","Type":"ContainerDied","Data":"61ad0e262602b4c322da5b87b319ddeb40f10cfd628915844aaf545c6a094c62"} Mar 21 09:30:06 crc kubenswrapper[4932]: I0321 09:30:06.250310 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ad0e262602b4c322da5b87b319ddeb40f10cfd628915844aaf545c6a094c62" Mar 21 09:30:06 crc kubenswrapper[4932]: I0321 09:30:06.250342 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568090-mmjpz" Mar 21 09:30:06 crc kubenswrapper[4932]: I0321 09:30:06.307775 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568084-gfk8p"] Mar 21 09:30:06 crc kubenswrapper[4932]: I0321 09:30:06.320532 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568084-gfk8p"] Mar 21 09:30:07 crc kubenswrapper[4932]: I0321 09:30:07.713743 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5cbc8bb-0dc6-4b75-936d-661f39208aa0" path="/var/lib/kubelet/pods/b5cbc8bb-0dc6-4b75-936d-661f39208aa0/volumes" Mar 21 09:30:09 crc kubenswrapper[4932]: I0321 09:30:09.703228 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:30:09 crc kubenswrapper[4932]: E0321 09:30:09.703853 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:30:12 crc kubenswrapper[4932]: I0321 09:30:12.703144 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:30:12 crc kubenswrapper[4932]: E0321 09:30:12.703854 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:30:15 crc kubenswrapper[4932]: I0321 
09:30:15.703423 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:30:15 crc kubenswrapper[4932]: E0321 09:30:15.703980 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:30:24 crc kubenswrapper[4932]: I0321 09:30:24.028795 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-f6c8s"] Mar 21 09:30:24 crc kubenswrapper[4932]: I0321 09:30:24.036087 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-f6c8s"] Mar 21 09:30:24 crc kubenswrapper[4932]: I0321 09:30:24.702448 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:30:24 crc kubenswrapper[4932]: E0321 09:30:24.702976 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.029987 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7481-account-create-update-lt9rh"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.040072 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fa88-account-create-update-xmhz8"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.049285 4932 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-e3f5-account-create-update-hnwjt"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.056819 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-95ptw"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.063783 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-czdbl"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.070730 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-czdbl"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.079372 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e3f5-account-create-update-hnwjt"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.088190 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7481-account-create-update-lt9rh"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.097233 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fa88-account-create-update-xmhz8"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.105534 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-95ptw"] Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.716689 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3949704b-d11e-41e5-aa84-7df80e97aa8d" path="/var/lib/kubelet/pods/3949704b-d11e-41e5-aa84-7df80e97aa8d/volumes" Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.717712 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e220f5c-e5d4-40b8-9cc0-c650073c3dfa" path="/var/lib/kubelet/pods/3e220f5c-e5d4-40b8-9cc0-c650073c3dfa/volumes" Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.718640 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7528d17d-bb05-4ebe-8e3f-1ad0a8caf995" 
path="/var/lib/kubelet/pods/7528d17d-bb05-4ebe-8e3f-1ad0a8caf995/volumes" Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.719552 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef3c2b4-8d78-4753-8a9c-b3fca108d873" path="/var/lib/kubelet/pods/7ef3c2b4-8d78-4753-8a9c-b3fca108d873/volumes" Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.721444 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9553f1cd-76cc-4b81-96e4-b5144f08a050" path="/var/lib/kubelet/pods/9553f1cd-76cc-4b81-96e4-b5144f08a050/volumes" Mar 21 09:30:25 crc kubenswrapper[4932]: I0321 09:30:25.722069 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c619be3a-aba3-49f0-83e1-6c89fb5fca43" path="/var/lib/kubelet/pods/c619be3a-aba3-49f0-83e1-6c89fb5fca43/volumes" Mar 21 09:30:27 crc kubenswrapper[4932]: I0321 09:30:27.711409 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0" Mar 21 09:30:27 crc kubenswrapper[4932]: E0321 09:30:27.712185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:30:29 crc kubenswrapper[4932]: I0321 09:30:29.702734 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9" Mar 21 09:30:29 crc kubenswrapper[4932]: E0321 09:30:29.703183 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.682524 4932 scope.go:117] "RemoveContainer" containerID="0303fbd4dbfdd4cd49b4c476697682d2de71354978522e8b92c9b504556c76af"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.725077 4932 scope.go:117] "RemoveContainer" containerID="80103d4519b3e81df4085002d0ada9b777104acb945efe8eb58b2ca3d0a08302"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.763167 4932 scope.go:117] "RemoveContainer" containerID="932a1c6d409e9692b2789416fb776e48b4ac43bcef769fb3dcf2fd525303f290"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.811791 4932 scope.go:117] "RemoveContainer" containerID="7e1f3848cb3279719d67772dfc860f20901f4bd99c471c38dc47d4917c90baef"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.865181 4932 scope.go:117] "RemoveContainer" containerID="1ef68aa312ce21469698dc3310b5b66a48219c39f21d31508c20de6177ea4b36"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.925319 4932 scope.go:117] "RemoveContainer" containerID="a46ebf1a84e931d7638bdff05c62a8d8b7bbb3453a30a93dcd8914a74a182233"
Mar 21 09:30:30 crc kubenswrapper[4932]: I0321 09:30:30.979110 4932 scope.go:117] "RemoveContainer" containerID="e6af1ee035b663c6af79aa9008bdd99565d9c49e660ddabcca66d2858c97c4c1"
Mar 21 09:30:31 crc kubenswrapper[4932]: I0321 09:30:31.017318 4932 scope.go:117] "RemoveContainer" containerID="c8afc9ae6a57fda89f75a0cfc85b41a583e1009b760eebeb550b5bcbec71267e"
Mar 21 09:30:37 crc kubenswrapper[4932]: I0321 09:30:37.709376 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966"
Mar 21 09:30:38 crc kubenswrapper[4932]: I0321 09:30:38.542667 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"5a1ef2d5928d16e67ae974037fc1d10a8f6817dd27e60fc39017edd4864c0de2"}
Mar 21 09:30:40 crc kubenswrapper[4932]: I0321 09:30:40.702821 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:30:40 crc kubenswrapper[4932]: E0321 09:30:40.703547 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:30:44 crc kubenswrapper[4932]: I0321 09:30:44.702634 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:30:44 crc kubenswrapper[4932]: E0321 09:30:44.703863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:30:53 crc kubenswrapper[4932]: I0321 09:30:53.703255 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:30:53 crc kubenswrapper[4932]: E0321 09:30:53.704134 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:30:55 crc kubenswrapper[4932]: I0321 09:30:55.702855 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:30:55 crc kubenswrapper[4932]: E0321 09:30:55.703419 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:31:00 crc kubenswrapper[4932]: I0321 09:31:00.047272 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jtrd4"]
Mar 21 09:31:00 crc kubenswrapper[4932]: I0321 09:31:00.058175 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jtrd4"]
Mar 21 09:31:01 crc kubenswrapper[4932]: I0321 09:31:01.714819 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c282ef1-ca4b-4eb8-9acb-ed95a92625a4" path="/var/lib/kubelet/pods/3c282ef1-ca4b-4eb8-9acb-ed95a92625a4/volumes"
Mar 21 09:31:06 crc kubenswrapper[4932]: I0321 09:31:06.703525 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:31:06 crc kubenswrapper[4932]: E0321 09:31:06.704218 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:31:07 crc kubenswrapper[4932]: I0321 09:31:07.710384 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:31:07 crc kubenswrapper[4932]: E0321 09:31:07.711036 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:31:21 crc kubenswrapper[4932]: I0321 09:31:21.703483 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:31:21 crc kubenswrapper[4932]: I0321 09:31:21.929095 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"}
Mar 21 09:31:22 crc kubenswrapper[4932]: I0321 09:31:22.702156 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:31:22 crc kubenswrapper[4932]: E0321 09:31:22.702429 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:31:25 crc kubenswrapper[4932]: I0321 09:31:25.042447 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwhrl"]
Mar 21 09:31:25 crc kubenswrapper[4932]: I0321 09:31:25.050546 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwhrl"]
Mar 21 09:31:25 crc kubenswrapper[4932]: I0321 09:31:25.719958 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9966f4-a444-47a1-9394-7d0483967734" path="/var/lib/kubelet/pods/7c9966f4-a444-47a1-9394-7d0483967734/volumes"
Mar 21 09:31:27 crc kubenswrapper[4932]: I0321 09:31:27.741886 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:31:27 crc kubenswrapper[4932]: I0321 09:31:27.742284 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:31:29 crc kubenswrapper[4932]: I0321 09:31:29.037326 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-92xlk"]
Mar 21 09:31:29 crc kubenswrapper[4932]: I0321 09:31:29.050102 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-92xlk"]
Mar 21 09:31:29 crc kubenswrapper[4932]: I0321 09:31:29.715379 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d55e78-bebc-4aa9-9043-a107dce766ab" path="/var/lib/kubelet/pods/98d55e78-bebc-4aa9-9043-a107dce766ab/volumes"
Mar 21 09:31:30 crc kubenswrapper[4932]: E0321 09:31:30.180851 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-conmon-66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13285608_51c1_4307_a442_e0cd0e881385.slice/crio-66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223.scope\": RecentStats: unable to find data in memory cache]"
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.012067 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" exitCode=1
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.012143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"}
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.012445 4932 scope.go:117] "RemoveContainer" containerID="c3a58b06c25cbea71640ece88629576085d69ca84569325138eb83668aa58ef9"
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.013240 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:31:31 crc kubenswrapper[4932]: E0321 09:31:31.013641 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.177708 4932 scope.go:117] "RemoveContainer" containerID="36ef173a5f14536087ad02cd2c4e7c5c85f38eaa9c9c63e3aa0461cf3e6eae93"
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.250473 4932 scope.go:117] "RemoveContainer" containerID="7f9e8bdaca3cd0e46e062a109bef725daec77c2c828df2ff096430f39d58156c"
Mar 21 09:31:31 crc kubenswrapper[4932]: I0321 09:31:31.294552 4932 scope.go:117] "RemoveContainer" containerID="b5c105d3d2060015e4efc7f98e2970afdf3451748f675a044452fb9c7c0b0a87"
Mar 21 09:31:34 crc kubenswrapper[4932]: I0321 09:31:34.702170 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:31:35 crc kubenswrapper[4932]: I0321 09:31:35.049943 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"}
Mar 21 09:31:37 crc kubenswrapper[4932]: I0321 09:31:37.741271 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:31:37 crc kubenswrapper[4932]: I0321 09:31:37.742256 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:31:37 crc kubenswrapper[4932]: I0321 09:31:37.743632 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:31:37 crc kubenswrapper[4932]: E0321 09:31:37.743937 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:31:37 crc kubenswrapper[4932]: I0321 09:31:37.948534 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:31:37 crc kubenswrapper[4932]: I0321 09:31:37.949097 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:31:44 crc kubenswrapper[4932]: I0321 09:31:44.133389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"}
Mar 21 09:31:44 crc kubenswrapper[4932]: I0321 09:31:44.134378 4932 scope.go:117] "RemoveContainer" containerID="4251e30f160099f12e6d7bf4f298e10407fa10ad74ffe8b4a1b4bdb384379cd0"
Mar 21 09:31:44 crc kubenswrapper[4932]: I0321 09:31:44.133471 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" exitCode=1
Mar 21 09:31:44 crc kubenswrapper[4932]: I0321 09:31:44.136034 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:31:44 crc kubenswrapper[4932]: E0321 09:31:44.136677 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:31:47 crc kubenswrapper[4932]: I0321 09:31:47.947735 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:31:47 crc kubenswrapper[4932]: I0321 09:31:47.948060 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:31:47 crc kubenswrapper[4932]: I0321 09:31:47.948972 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:31:47 crc kubenswrapper[4932]: E0321 09:31:47.949194 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:31:49 crc kubenswrapper[4932]: I0321 09:31:49.703027 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:31:49 crc kubenswrapper[4932]: E0321 09:31:49.703540 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.143534 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568092-vcsww"]
Mar 21 09:32:00 crc kubenswrapper[4932]: E0321 09:32:00.144687 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c09d9da-3112-44ac-a0fb-bba652fc0b97" containerName="collect-profiles"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.144703 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c09d9da-3112-44ac-a0fb-bba652fc0b97" containerName="collect-profiles"
Mar 21 09:32:00 crc kubenswrapper[4932]: E0321 09:32:00.144758 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03852e5-3e01-4aa8-a6bb-06533edf404e" containerName="oc"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.144766 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03852e5-3e01-4aa8-a6bb-06533edf404e" containerName="oc"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.145004 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c09d9da-3112-44ac-a0fb-bba652fc0b97" containerName="collect-profiles"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.145024 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03852e5-3e01-4aa8-a6bb-06533edf404e" containerName="oc"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.145943 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.148860 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.149542 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.149636 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.152917 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568092-vcsww"]
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.158808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjnqf\" (UniqueName: \"kubernetes.io/projected/ed62f384-7497-4500-9102-479493765eb1-kube-api-access-rjnqf\") pod \"auto-csr-approver-29568092-vcsww\" (UID: \"ed62f384-7497-4500-9102-479493765eb1\") " pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.260956 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjnqf\" (UniqueName: \"kubernetes.io/projected/ed62f384-7497-4500-9102-479493765eb1-kube-api-access-rjnqf\") pod \"auto-csr-approver-29568092-vcsww\" (UID: \"ed62f384-7497-4500-9102-479493765eb1\") " pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.281471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjnqf\" (UniqueName: \"kubernetes.io/projected/ed62f384-7497-4500-9102-479493765eb1-kube-api-access-rjnqf\") pod \"auto-csr-approver-29568092-vcsww\" (UID: \"ed62f384-7497-4500-9102-479493765eb1\") " pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.465182 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.702921 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:32:00 crc kubenswrapper[4932]: E0321 09:32:00.703399 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:32:00 crc kubenswrapper[4932]: I0321 09:32:00.904561 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568092-vcsww"]
Mar 21 09:32:01 crc kubenswrapper[4932]: I0321 09:32:01.286823 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568092-vcsww" event={"ID":"ed62f384-7497-4500-9102-479493765eb1","Type":"ContainerStarted","Data":"48e33ad41763f5baf0e5424e40d9d33a7ec2c1599f09b87683f6ef44ad8f14be"}
Mar 21 09:32:02 crc kubenswrapper[4932]: I0321 09:32:02.301020 4932 generic.go:334] "Generic (PLEG): container finished" podID="ed62f384-7497-4500-9102-479493765eb1" containerID="ad6961882ca7a1d45d65b1d11f94d75cd4e771e3f15a9ae27ae0d44fa68ba64e" exitCode=0
Mar 21 09:32:02 crc kubenswrapper[4932]: I0321 09:32:02.301204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568092-vcsww" event={"ID":"ed62f384-7497-4500-9102-479493765eb1","Type":"ContainerDied","Data":"ad6961882ca7a1d45d65b1d11f94d75cd4e771e3f15a9ae27ae0d44fa68ba64e"}
Mar 21 09:32:03 crc kubenswrapper[4932]: I0321 09:32:03.702861 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:32:03 crc kubenswrapper[4932]: E0321 09:32:03.703295 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:32:03 crc kubenswrapper[4932]: I0321 09:32:03.737221 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:03 crc kubenswrapper[4932]: I0321 09:32:03.765684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjnqf\" (UniqueName: \"kubernetes.io/projected/ed62f384-7497-4500-9102-479493765eb1-kube-api-access-rjnqf\") pod \"ed62f384-7497-4500-9102-479493765eb1\" (UID: \"ed62f384-7497-4500-9102-479493765eb1\") "
Mar 21 09:32:03 crc kubenswrapper[4932]: I0321 09:32:03.771316 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed62f384-7497-4500-9102-479493765eb1-kube-api-access-rjnqf" (OuterVolumeSpecName: "kube-api-access-rjnqf") pod "ed62f384-7497-4500-9102-479493765eb1" (UID: "ed62f384-7497-4500-9102-479493765eb1"). InnerVolumeSpecName "kube-api-access-rjnqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:32:03 crc kubenswrapper[4932]: I0321 09:32:03.869001 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjnqf\" (UniqueName: \"kubernetes.io/projected/ed62f384-7497-4500-9102-479493765eb1-kube-api-access-rjnqf\") on node \"crc\" DevicePath \"\""
Mar 21 09:32:04 crc kubenswrapper[4932]: I0321 09:32:04.319557 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568092-vcsww" event={"ID":"ed62f384-7497-4500-9102-479493765eb1","Type":"ContainerDied","Data":"48e33ad41763f5baf0e5424e40d9d33a7ec2c1599f09b87683f6ef44ad8f14be"}
Mar 21 09:32:04 crc kubenswrapper[4932]: I0321 09:32:04.319888 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e33ad41763f5baf0e5424e40d9d33a7ec2c1599f09b87683f6ef44ad8f14be"
Mar 21 09:32:04 crc kubenswrapper[4932]: I0321 09:32:04.319663 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568092-vcsww"
Mar 21 09:32:04 crc kubenswrapper[4932]: I0321 09:32:04.802857 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568086-g84hp"]
Mar 21 09:32:04 crc kubenswrapper[4932]: I0321 09:32:04.810282 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568086-g84hp"]
Mar 21 09:32:05 crc kubenswrapper[4932]: I0321 09:32:05.715072 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d274d9-c8e6-47e4-afc3-93c6023540da" path="/var/lib/kubelet/pods/b0d274d9-c8e6-47e4-afc3-93c6023540da/volumes"
Mar 21 09:32:10 crc kubenswrapper[4932]: I0321 09:32:10.028138 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hcvhr"]
Mar 21 09:32:10 crc kubenswrapper[4932]: I0321 09:32:10.035979 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hcvhr"]
Mar 21 09:32:11 crc kubenswrapper[4932]: I0321 09:32:11.714643 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e696cfc-972e-46a0-bf24-c88ee59fb7e1" path="/var/lib/kubelet/pods/3e696cfc-972e-46a0-bf24-c88ee59fb7e1/volumes"
Mar 21 09:32:15 crc kubenswrapper[4932]: I0321 09:32:15.703011 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:32:15 crc kubenswrapper[4932]: E0321 09:32:15.703556 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:32:17 crc kubenswrapper[4932]: I0321 09:32:17.708905 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:32:17 crc kubenswrapper[4932]: E0321 09:32:17.709631 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:32:27 crc kubenswrapper[4932]: I0321 09:32:27.709323 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:32:27 crc kubenswrapper[4932]: E0321 09:32:27.710049 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:32:30 crc kubenswrapper[4932]: I0321 09:32:30.703110 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:32:30 crc kubenswrapper[4932]: E0321 09:32:30.703848 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:32:31 crc kubenswrapper[4932]: I0321 09:32:31.419983 4932 scope.go:117] "RemoveContainer" containerID="cfa2cdea727b3580a314ed7bcf3299c97424946b4a17a8d8149f37101a7c553d"
Mar 21 09:32:31 crc kubenswrapper[4932]: I0321 09:32:31.459747 4932 scope.go:117] "RemoveContainer" containerID="48399e95423e690e191a382c6667f1323d264dbc97c9d649df199e1151a23ca4"
Mar 21 09:32:38 crc kubenswrapper[4932]: I0321 09:32:38.703302 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:32:38 crc kubenswrapper[4932]: E0321 09:32:38.704819 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:32:45 crc kubenswrapper[4932]: I0321 09:32:45.703286 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:32:45 crc kubenswrapper[4932]: E0321 09:32:45.704247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:32:51 crc kubenswrapper[4932]: I0321 09:32:51.702491 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:32:51 crc kubenswrapper[4932]: E0321 09:32:51.703478 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:32:59 crc kubenswrapper[4932]: I0321 09:32:59.702654 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:32:59 crc kubenswrapper[4932]: E0321 09:32:59.704166 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:33:00 crc kubenswrapper[4932]: I0321 09:33:00.225588 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:33:00 crc kubenswrapper[4932]: I0321 09:33:00.225669 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:33:03 crc kubenswrapper[4932]: I0321 09:33:03.703796 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:33:03 crc kubenswrapper[4932]: E0321 09:33:03.704283 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:33:11 crc kubenswrapper[4932]: I0321 09:33:11.703795 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:33:11 crc kubenswrapper[4932]: E0321 09:33:11.705135 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:33:17 crc kubenswrapper[4932]: I0321 09:33:17.710649 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:33:17 crc kubenswrapper[4932]: E0321 09:33:17.711846 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:33:24 crc kubenswrapper[4932]: I0321 09:33:24.703860 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:33:24 crc kubenswrapper[4932]: E0321 09:33:24.705336 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:33:30 crc kubenswrapper[4932]: I0321 09:33:30.225888 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:33:30 crc kubenswrapper[4932]: I0321 09:33:30.226877 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:33:31 crc kubenswrapper[4932]: I0321 09:33:31.703827 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:33:31 crc kubenswrapper[4932]: E0321 09:33:31.704994 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:33:39 crc kubenswrapper[4932]: I0321 09:33:39.703015 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:33:39 crc kubenswrapper[4932]: E0321 09:33:39.704004 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:33:44 crc kubenswrapper[4932]: I0321 09:33:44.703542 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:33:44 crc kubenswrapper[4932]: E0321 09:33:44.706238 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:33:52 crc kubenswrapper[4932]: I0321 09:33:52.703214 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223"
Mar 21 09:33:52 crc kubenswrapper[4932]: E0321 09:33:52.703948 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:33:56 crc kubenswrapper[4932]: I0321 09:33:56.704063 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377"
Mar 21 09:33:56 crc kubenswrapper[4932]: E0321 09:33:56.705429 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.150754 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568094-mgmvn"]
Mar 21 09:34:00 crc kubenswrapper[4932]: E0321 09:34:00.151715 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed62f384-7497-4500-9102-479493765eb1" containerName="oc"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.151734 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed62f384-7497-4500-9102-479493765eb1" containerName="oc"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.152018 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed62f384-7497-4500-9102-479493765eb1" containerName="oc"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.152715 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568094-mgmvn"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.155501 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.159944 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568094-mgmvn"]
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.162232 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.162282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.226261 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.226330 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.226401 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b"
Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.227442 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon"
containerStatusID={"Type":"cri-o","ID":"5a1ef2d5928d16e67ae974037fc1d10a8f6817dd27e60fc39017edd4864c0de2"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.227507 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://5a1ef2d5928d16e67ae974037fc1d10a8f6817dd27e60fc39017edd4864c0de2" gracePeriod=600 Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.231930 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9zc9\" (UniqueName: \"kubernetes.io/projected/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1-kube-api-access-k9zc9\") pod \"auto-csr-approver-29568094-mgmvn\" (UID: \"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1\") " pod="openshift-infra/auto-csr-approver-29568094-mgmvn" Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.333822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9zc9\" (UniqueName: \"kubernetes.io/projected/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1-kube-api-access-k9zc9\") pod \"auto-csr-approver-29568094-mgmvn\" (UID: \"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1\") " pod="openshift-infra/auto-csr-approver-29568094-mgmvn" Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.359280 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9zc9\" (UniqueName: \"kubernetes.io/projected/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1-kube-api-access-k9zc9\") pod \"auto-csr-approver-29568094-mgmvn\" (UID: \"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1\") " pod="openshift-infra/auto-csr-approver-29568094-mgmvn" Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.473538 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568094-mgmvn" Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.474061 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="5a1ef2d5928d16e67ae974037fc1d10a8f6817dd27e60fc39017edd4864c0de2" exitCode=0 Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.474103 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"5a1ef2d5928d16e67ae974037fc1d10a8f6817dd27e60fc39017edd4864c0de2"} Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.474139 4932 scope.go:117] "RemoveContainer" containerID="6f3b9ae1459b395a540ff4e8ba09fc07142982386e3308b2aa1b665f65b43966" Mar 21 09:34:00 crc kubenswrapper[4932]: I0321 09:34:00.924259 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568094-mgmvn"] Mar 21 09:34:01 crc kubenswrapper[4932]: I0321 09:34:01.486338 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"} Mar 21 09:34:01 crc kubenswrapper[4932]: I0321 09:34:01.487980 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568094-mgmvn" event={"ID":"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1","Type":"ContainerStarted","Data":"a1fb88e51531ce9e2ba0931b6a98c2726b2bf486ed5a6915edcf80d1ebd92444"} Mar 21 09:34:02 crc kubenswrapper[4932]: I0321 09:34:02.498391 4932 generic.go:334] "Generic (PLEG): container finished" podID="9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1" containerID="5ad9f0ba5cca57fa90cd1ddd56e0eef06ea2799488a30b4a42235caa5412f6cd" exitCode=0 Mar 21 
09:34:02 crc kubenswrapper[4932]: I0321 09:34:02.498476 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568094-mgmvn" event={"ID":"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1","Type":"ContainerDied","Data":"5ad9f0ba5cca57fa90cd1ddd56e0eef06ea2799488a30b4a42235caa5412f6cd"} Mar 21 09:34:03 crc kubenswrapper[4932]: I0321 09:34:03.831418 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:34:03 crc kubenswrapper[4932]: E0321 09:34:03.832303 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.070145 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568094-mgmvn" Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.244343 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9zc9\" (UniqueName: \"kubernetes.io/projected/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1-kube-api-access-k9zc9\") pod \"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1\" (UID: \"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1\") " Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.250986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1-kube-api-access-k9zc9" (OuterVolumeSpecName: "kube-api-access-k9zc9") pod "9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1" (UID: "9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1"). InnerVolumeSpecName "kube-api-access-k9zc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.347651 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9zc9\" (UniqueName: \"kubernetes.io/projected/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1-kube-api-access-k9zc9\") on node \"crc\" DevicePath \"\"" Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.521323 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568094-mgmvn" event={"ID":"9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1","Type":"ContainerDied","Data":"a1fb88e51531ce9e2ba0931b6a98c2726b2bf486ed5a6915edcf80d1ebd92444"} Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.521390 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1fb88e51531ce9e2ba0931b6a98c2726b2bf486ed5a6915edcf80d1ebd92444" Mar 21 09:34:04 crc kubenswrapper[4932]: I0321 09:34:04.521493 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568094-mgmvn" Mar 21 09:34:05 crc kubenswrapper[4932]: I0321 09:34:05.140428 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568088-qvwwb"] Mar 21 09:34:05 crc kubenswrapper[4932]: I0321 09:34:05.147730 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568088-qvwwb"] Mar 21 09:34:05 crc kubenswrapper[4932]: I0321 09:34:05.714448 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efbd146-0cf9-494d-b174-41faec84bfd7" path="/var/lib/kubelet/pods/9efbd146-0cf9-494d-b174-41faec84bfd7/volumes" Mar 21 09:34:09 crc kubenswrapper[4932]: I0321 09:34:09.703104 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:34:09 crc kubenswrapper[4932]: E0321 09:34:09.704492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:34:18 crc kubenswrapper[4932]: I0321 09:34:18.702786 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:34:18 crc kubenswrapper[4932]: E0321 09:34:18.703376 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:34:22 crc kubenswrapper[4932]: I0321 09:34:22.703953 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:34:22 crc kubenswrapper[4932]: E0321 09:34:22.704868 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.803249 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ttdb"] Mar 21 09:34:29 crc kubenswrapper[4932]: E0321 09:34:29.804202 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1" containerName="oc" Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.804218 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1" containerName="oc" Mar 21 
09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.804454 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1" containerName="oc" Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.805963 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.814863 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ttdb"] Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.909149 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4nl5\" (UniqueName: \"kubernetes.io/projected/67a10b83-a0ad-45ef-af1f-e9f78c523559-kube-api-access-v4nl5\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.909334 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-catalog-content\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:29 crc kubenswrapper[4932]: I0321 09:34:29.909409 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-utilities\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.011721 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4nl5\" (UniqueName: 
\"kubernetes.io/projected/67a10b83-a0ad-45ef-af1f-e9f78c523559-kube-api-access-v4nl5\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.011891 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-catalog-content\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.011946 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-utilities\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.012514 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-utilities\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.012518 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-catalog-content\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.035780 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4nl5\" (UniqueName: 
\"kubernetes.io/projected/67a10b83-a0ad-45ef-af1f-e9f78c523559-kube-api-access-v4nl5\") pod \"redhat-operators-6ttdb\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.130072 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.615213 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ttdb"] Mar 21 09:34:30 crc kubenswrapper[4932]: I0321 09:34:30.762617 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ttdb" event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerStarted","Data":"8fa8e764349e48a1845390973632822016b3c4fff599866365d46380c5e7e9fb"} Mar 21 09:34:31 crc kubenswrapper[4932]: I0321 09:34:31.578279 4932 scope.go:117] "RemoveContainer" containerID="eac1cefa6ba5609645d2367655457824f0c8b2ed39e7c64a43fa03eaea86bd70" Mar 21 09:34:31 crc kubenswrapper[4932]: I0321 09:34:31.702669 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:34:31 crc kubenswrapper[4932]: E0321 09:34:31.702901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:34:31 crc kubenswrapper[4932]: I0321 09:34:31.773883 4932 generic.go:334] "Generic (PLEG): container finished" podID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerID="4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f" exitCode=0 Mar 21 09:34:31 crc kubenswrapper[4932]: I0321 09:34:31.773972 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ttdb" event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerDied","Data":"4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f"} Mar 21 09:34:32 crc kubenswrapper[4932]: I0321 09:34:32.787075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ttdb" event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerStarted","Data":"4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c"} Mar 21 09:34:33 crc kubenswrapper[4932]: I0321 09:34:33.797461 4932 generic.go:334] "Generic (PLEG): container finished" podID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerID="4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c" exitCode=0 Mar 21 09:34:33 crc kubenswrapper[4932]: I0321 09:34:33.797510 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ttdb" event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerDied","Data":"4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c"} Mar 21 09:34:34 crc kubenswrapper[4932]: I0321 09:34:34.702835 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:34:34 crc kubenswrapper[4932]: E0321 09:34:34.703570 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:34:34 crc kubenswrapper[4932]: I0321 09:34:34.808419 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ttdb" 
event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerStarted","Data":"eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a"} Mar 21 09:34:34 crc kubenswrapper[4932]: I0321 09:34:34.831487 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ttdb" podStartSLOduration=3.3529106889999998 podStartE2EDuration="5.831462059s" podCreationTimestamp="2026-03-21 09:34:29 +0000 UTC" firstStartedPulling="2026-03-21 09:34:31.776455501 +0000 UTC m=+2175.371653770" lastFinishedPulling="2026-03-21 09:34:34.255006871 +0000 UTC m=+2177.850205140" observedRunningTime="2026-03-21 09:34:34.82569662 +0000 UTC m=+2178.420894889" watchObservedRunningTime="2026-03-21 09:34:34.831462059 +0000 UTC m=+2178.426660328" Mar 21 09:34:40 crc kubenswrapper[4932]: I0321 09:34:40.131204 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:40 crc kubenswrapper[4932]: I0321 09:34:40.132986 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:41 crc kubenswrapper[4932]: I0321 09:34:41.177639 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6ttdb" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="registry-server" probeResult="failure" output=< Mar 21 09:34:41 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 09:34:41 crc kubenswrapper[4932]: > Mar 21 09:34:46 crc kubenswrapper[4932]: I0321 09:34:46.703338 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:34:46 crc kubenswrapper[4932]: E0321 09:34:46.704048 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:34:47 crc kubenswrapper[4932]: I0321 09:34:47.709299 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:34:47 crc kubenswrapper[4932]: E0321 09:34:47.709858 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:34:50 crc kubenswrapper[4932]: I0321 09:34:50.174411 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:50 crc kubenswrapper[4932]: I0321 09:34:50.220634 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:50 crc kubenswrapper[4932]: I0321 09:34:50.421079 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ttdb"] Mar 21 09:34:51 crc kubenswrapper[4932]: I0321 09:34:51.959174 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ttdb" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="registry-server" containerID="cri-o://eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a" gracePeriod=2 Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.445669 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.614698 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4nl5\" (UniqueName: \"kubernetes.io/projected/67a10b83-a0ad-45ef-af1f-e9f78c523559-kube-api-access-v4nl5\") pod \"67a10b83-a0ad-45ef-af1f-e9f78c523559\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.614864 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-utilities\") pod \"67a10b83-a0ad-45ef-af1f-e9f78c523559\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.614976 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-catalog-content\") pod \"67a10b83-a0ad-45ef-af1f-e9f78c523559\" (UID: \"67a10b83-a0ad-45ef-af1f-e9f78c523559\") " Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.616285 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-utilities" (OuterVolumeSpecName: "utilities") pod "67a10b83-a0ad-45ef-af1f-e9f78c523559" (UID: "67a10b83-a0ad-45ef-af1f-e9f78c523559"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.621767 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a10b83-a0ad-45ef-af1f-e9f78c523559-kube-api-access-v4nl5" (OuterVolumeSpecName: "kube-api-access-v4nl5") pod "67a10b83-a0ad-45ef-af1f-e9f78c523559" (UID: "67a10b83-a0ad-45ef-af1f-e9f78c523559"). InnerVolumeSpecName "kube-api-access-v4nl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.717760 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.718140 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4nl5\" (UniqueName: \"kubernetes.io/projected/67a10b83-a0ad-45ef-af1f-e9f78c523559-kube-api-access-v4nl5\") on node \"crc\" DevicePath \"\"" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.745326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67a10b83-a0ad-45ef-af1f-e9f78c523559" (UID: "67a10b83-a0ad-45ef-af1f-e9f78c523559"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.820295 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67a10b83-a0ad-45ef-af1f-e9f78c523559-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.974728 4932 generic.go:334] "Generic (PLEG): container finished" podID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerID="eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a" exitCode=0 Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.974789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ttdb" event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerDied","Data":"eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a"} Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.974841 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-6ttdb" event={"ID":"67a10b83-a0ad-45ef-af1f-e9f78c523559","Type":"ContainerDied","Data":"8fa8e764349e48a1845390973632822016b3c4fff599866365d46380c5e7e9fb"} Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.974864 4932 scope.go:117] "RemoveContainer" containerID="eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.974881 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ttdb" Mar 21 09:34:52 crc kubenswrapper[4932]: I0321 09:34:52.996080 4932 scope.go:117] "RemoveContainer" containerID="4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.030625 4932 scope.go:117] "RemoveContainer" containerID="4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.031684 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ttdb"] Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.040421 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ttdb"] Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.087844 4932 scope.go:117] "RemoveContainer" containerID="eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a" Mar 21 09:34:53 crc kubenswrapper[4932]: E0321 09:34:53.088910 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a\": container with ID starting with eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a not found: ID does not exist" containerID="eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.088969 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a"} err="failed to get container status \"eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a\": rpc error: code = NotFound desc = could not find container \"eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a\": container with ID starting with eefc78c66aabde5e76b085be505684c378518c825fb44d7eccd6bdd5d54c8e5a not found: ID does not exist" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.089003 4932 scope.go:117] "RemoveContainer" containerID="4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c" Mar 21 09:34:53 crc kubenswrapper[4932]: E0321 09:34:53.090435 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c\": container with ID starting with 4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c not found: ID does not exist" containerID="4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.090484 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c"} err="failed to get container status \"4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c\": rpc error: code = NotFound desc = could not find container \"4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c\": container with ID starting with 4b40651087a20cb43344e6a2bf2ede1232a97323ebb8afd733532adcd0bd053c not found: ID does not exist" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.090513 4932 scope.go:117] "RemoveContainer" containerID="4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f" Mar 21 09:34:53 crc kubenswrapper[4932]: E0321 
09:34:53.094644 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f\": container with ID starting with 4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f not found: ID does not exist" containerID="4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.094759 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f"} err="failed to get container status \"4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f\": rpc error: code = NotFound desc = could not find container \"4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f\": container with ID starting with 4cd0d2addceec1bd1e1b56add1cd522cf570c1c32c11efe29b595be928c68a9f not found: ID does not exist" Mar 21 09:34:53 crc kubenswrapper[4932]: I0321 09:34:53.721143 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" path="/var/lib/kubelet/pods/67a10b83-a0ad-45ef-af1f-e9f78c523559/volumes" Mar 21 09:34:58 crc kubenswrapper[4932]: I0321 09:34:58.702605 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:34:58 crc kubenswrapper[4932]: E0321 09:34:58.703483 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:35:00 crc kubenswrapper[4932]: I0321 09:35:00.703175 4932 scope.go:117] "RemoveContainer" 
containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:35:00 crc kubenswrapper[4932]: E0321 09:35:00.703852 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:35:12 crc kubenswrapper[4932]: I0321 09:35:12.703160 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:35:12 crc kubenswrapper[4932]: E0321 09:35:12.704093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:35:14 crc kubenswrapper[4932]: I0321 09:35:14.702892 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:35:14 crc kubenswrapper[4932]: E0321 09:35:14.703335 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:35:25 crc kubenswrapper[4932]: I0321 09:35:25.703162 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:35:25 crc kubenswrapper[4932]: E0321 09:35:25.704294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:35:27 crc kubenswrapper[4932]: I0321 09:35:27.710570 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:35:27 crc kubenswrapper[4932]: E0321 09:35:27.711874 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:35:39 crc kubenswrapper[4932]: I0321 09:35:39.702926 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:35:39 crc kubenswrapper[4932]: E0321 09:35:39.703996 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:35:42 crc kubenswrapper[4932]: I0321 09:35:42.702665 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:35:42 crc kubenswrapper[4932]: E0321 09:35:42.703209 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:35:54 crc kubenswrapper[4932]: I0321 09:35:54.702413 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:35:54 crc kubenswrapper[4932]: E0321 09:35:54.704225 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:35:57 crc kubenswrapper[4932]: I0321 09:35:57.708474 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:35:57 crc kubenswrapper[4932]: E0321 09:35:57.709155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.150127 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568096-fkvcg"] Mar 21 09:36:00 crc kubenswrapper[4932]: E0321 09:36:00.151245 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="extract-utilities" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.151280 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="extract-utilities" Mar 21 09:36:00 crc kubenswrapper[4932]: E0321 09:36:00.151312 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" 
containerName="extract-content" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.151325 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="extract-content" Mar 21 09:36:00 crc kubenswrapper[4932]: E0321 09:36:00.151337 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="registry-server" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.151374 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="registry-server" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.151701 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a10b83-a0ad-45ef-af1f-e9f78c523559" containerName="registry-server" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.152847 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.155146 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.155550 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.156832 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.160185 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568096-fkvcg"] Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.225547 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.225619 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.292528 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phhk6\" (UniqueName: \"kubernetes.io/projected/0c301475-0a7e-4509-991e-acbd4f47d23c-kube-api-access-phhk6\") pod \"auto-csr-approver-29568096-fkvcg\" (UID: \"0c301475-0a7e-4509-991e-acbd4f47d23c\") " pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.395662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phhk6\" (UniqueName: \"kubernetes.io/projected/0c301475-0a7e-4509-991e-acbd4f47d23c-kube-api-access-phhk6\") pod \"auto-csr-approver-29568096-fkvcg\" (UID: \"0c301475-0a7e-4509-991e-acbd4f47d23c\") " pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.418859 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phhk6\" (UniqueName: \"kubernetes.io/projected/0c301475-0a7e-4509-991e-acbd4f47d23c-kube-api-access-phhk6\") pod \"auto-csr-approver-29568096-fkvcg\" (UID: \"0c301475-0a7e-4509-991e-acbd4f47d23c\") " pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.471765 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.931649 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568096-fkvcg"] Mar 21 09:36:00 crc kubenswrapper[4932]: I0321 09:36:00.939130 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:36:01 crc kubenswrapper[4932]: I0321 09:36:01.469880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" event={"ID":"0c301475-0a7e-4509-991e-acbd4f47d23c","Type":"ContainerStarted","Data":"9d053e168b7a6dae2661ee3ef995092c416ac040a4a38daead651b567f3c4a38"} Mar 21 09:36:02 crc kubenswrapper[4932]: I0321 09:36:02.482229 4932 generic.go:334] "Generic (PLEG): container finished" podID="0c301475-0a7e-4509-991e-acbd4f47d23c" containerID="2d7da2ea6816dad0fa09b62821776e5a11f6fcc3f2c4b6f324a4e806358c1bc6" exitCode=0 Mar 21 09:36:02 crc kubenswrapper[4932]: I0321 09:36:02.482394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" event={"ID":"0c301475-0a7e-4509-991e-acbd4f47d23c","Type":"ContainerDied","Data":"2d7da2ea6816dad0fa09b62821776e5a11f6fcc3f2c4b6f324a4e806358c1bc6"} Mar 21 09:36:03 crc kubenswrapper[4932]: I0321 09:36:03.845512 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:03 crc kubenswrapper[4932]: I0321 09:36:03.970709 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phhk6\" (UniqueName: \"kubernetes.io/projected/0c301475-0a7e-4509-991e-acbd4f47d23c-kube-api-access-phhk6\") pod \"0c301475-0a7e-4509-991e-acbd4f47d23c\" (UID: \"0c301475-0a7e-4509-991e-acbd4f47d23c\") " Mar 21 09:36:03 crc kubenswrapper[4932]: I0321 09:36:03.976594 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c301475-0a7e-4509-991e-acbd4f47d23c-kube-api-access-phhk6" (OuterVolumeSpecName: "kube-api-access-phhk6") pod "0c301475-0a7e-4509-991e-acbd4f47d23c" (UID: "0c301475-0a7e-4509-991e-acbd4f47d23c"). InnerVolumeSpecName "kube-api-access-phhk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:36:04 crc kubenswrapper[4932]: I0321 09:36:04.073264 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phhk6\" (UniqueName: \"kubernetes.io/projected/0c301475-0a7e-4509-991e-acbd4f47d23c-kube-api-access-phhk6\") on node \"crc\" DevicePath \"\"" Mar 21 09:36:04 crc kubenswrapper[4932]: I0321 09:36:04.500477 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" event={"ID":"0c301475-0a7e-4509-991e-acbd4f47d23c","Type":"ContainerDied","Data":"9d053e168b7a6dae2661ee3ef995092c416ac040a4a38daead651b567f3c4a38"} Mar 21 09:36:04 crc kubenswrapper[4932]: I0321 09:36:04.500517 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d053e168b7a6dae2661ee3ef995092c416ac040a4a38daead651b567f3c4a38" Mar 21 09:36:04 crc kubenswrapper[4932]: I0321 09:36:04.500541 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568096-fkvcg" Mar 21 09:36:04 crc kubenswrapper[4932]: I0321 09:36:04.918140 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568090-mmjpz"] Mar 21 09:36:04 crc kubenswrapper[4932]: I0321 09:36:04.925501 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568090-mmjpz"] Mar 21 09:36:05 crc kubenswrapper[4932]: I0321 09:36:05.713521 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03852e5-3e01-4aa8-a6bb-06533edf404e" path="/var/lib/kubelet/pods/b03852e5-3e01-4aa8-a6bb-06533edf404e/volumes" Mar 21 09:36:07 crc kubenswrapper[4932]: I0321 09:36:07.708274 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:36:07 crc kubenswrapper[4932]: E0321 09:36:07.708892 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:36:08 crc kubenswrapper[4932]: I0321 09:36:08.703298 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:36:08 crc kubenswrapper[4932]: E0321 09:36:08.704097 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:36:19 crc kubenswrapper[4932]: I0321 09:36:19.702845 4932 scope.go:117] "RemoveContainer" 
containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:36:19 crc kubenswrapper[4932]: E0321 09:36:19.703548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:36:23 crc kubenswrapper[4932]: I0321 09:36:23.702578 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:36:23 crc kubenswrapper[4932]: E0321 09:36:23.703437 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:36:30 crc kubenswrapper[4932]: I0321 09:36:30.226123 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:36:30 crc kubenswrapper[4932]: I0321 09:36:30.226729 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:36:31 crc kubenswrapper[4932]: I0321 09:36:31.694191 4932 scope.go:117] "RemoveContainer" 
containerID="f1e5fd380ab81023a54c47945d0cf9fd46725b73e099fb762e1b618d634b63b4" Mar 21 09:36:34 crc kubenswrapper[4932]: I0321 09:36:34.702455 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:36:34 crc kubenswrapper[4932]: E0321 09:36:34.703132 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:36:36 crc kubenswrapper[4932]: I0321 09:36:36.702643 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:36:37 crc kubenswrapper[4932]: I0321 09:36:37.276683 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"} Mar 21 09:36:37 crc kubenswrapper[4932]: I0321 09:36:37.740545 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:36:37 crc kubenswrapper[4932]: I0321 09:36:37.740650 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:36:45 crc kubenswrapper[4932]: I0321 09:36:45.347375 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" exitCode=1 Mar 21 09:36:45 crc kubenswrapper[4932]: I0321 09:36:45.347462 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" 
event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"} Mar 21 09:36:45 crc kubenswrapper[4932]: I0321 09:36:45.347913 4932 scope.go:117] "RemoveContainer" containerID="66f8c3620d32242c3eacd12dc228686c8cd763a5412079ed96bc5ad39750d223" Mar 21 09:36:45 crc kubenswrapper[4932]: I0321 09:36:45.348885 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:36:45 crc kubenswrapper[4932]: E0321 09:36:45.349248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:36:47 crc kubenswrapper[4932]: I0321 09:36:47.740638 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:36:47 crc kubenswrapper[4932]: I0321 09:36:47.741033 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:36:47 crc kubenswrapper[4932]: I0321 09:36:47.741985 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:36:47 crc kubenswrapper[4932]: E0321 09:36:47.742223 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:36:48 crc kubenswrapper[4932]: I0321 09:36:48.703928 4932 scope.go:117] "RemoveContainer" 
containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:36:49 crc kubenswrapper[4932]: I0321 09:36:49.407469 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"} Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.490097 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" exitCode=1 Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.490141 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"} Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.490712 4932 scope.go:117] "RemoveContainer" containerID="18f9c6fe430265b105553ba8f32bb6046e2b35696bfacca8410ed4b462fea377" Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.491638 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:36:57 crc kubenswrapper[4932]: E0321 09:36:57.492248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.947640 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.948191 4932 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.948211 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:36:57 crc kubenswrapper[4932]: I0321 09:36:57.948224 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:36:58 crc kubenswrapper[4932]: I0321 09:36:58.503092 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:36:58 crc kubenswrapper[4932]: E0321 09:36:58.503411 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.225730 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.226082 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.226130 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:37:00 
crc kubenswrapper[4932]: I0321 09:37:00.227127 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.227202 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" gracePeriod=600 Mar 21 09:37:00 crc kubenswrapper[4932]: E0321 09:37:00.353292 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.520277 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" exitCode=0 Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.520330 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"} Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.520398 4932 scope.go:117] "RemoveContainer" 
containerID="5a1ef2d5928d16e67ae974037fc1d10a8f6817dd27e60fc39017edd4864c0de2" Mar 21 09:37:00 crc kubenswrapper[4932]: I0321 09:37:00.522134 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:37:00 crc kubenswrapper[4932]: E0321 09:37:00.522782 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:37:01 crc kubenswrapper[4932]: I0321 09:37:01.703156 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:37:01 crc kubenswrapper[4932]: E0321 09:37:01.703683 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:37:12 crc kubenswrapper[4932]: I0321 09:37:12.702862 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:37:12 crc kubenswrapper[4932]: I0321 09:37:12.703518 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:37:12 crc kubenswrapper[4932]: I0321 09:37:12.703638 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:37:12 crc kubenswrapper[4932]: E0321 09:37:12.703676 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:37:12 crc kubenswrapper[4932]: E0321 09:37:12.703746 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:37:12 crc kubenswrapper[4932]: E0321 09:37:12.703956 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.898373 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqx8s"] Mar 21 09:37:22 crc kubenswrapper[4932]: E0321 09:37:22.899620 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c301475-0a7e-4509-991e-acbd4f47d23c" containerName="oc" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.899641 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c301475-0a7e-4509-991e-acbd4f47d23c" containerName="oc" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.899950 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c301475-0a7e-4509-991e-acbd4f47d23c" containerName="oc" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.901831 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.915279 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqx8s"] Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.983459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p7p\" (UniqueName: \"kubernetes.io/projected/20dc5df1-ff01-4723-b38c-6c0142fd868b-kube-api-access-72p7p\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.983803 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-utilities\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:22 crc kubenswrapper[4932]: I0321 09:37:22.983885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-catalog-content\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.086862 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p7p\" (UniqueName: \"kubernetes.io/projected/20dc5df1-ff01-4723-b38c-6c0142fd868b-kube-api-access-72p7p\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: 
I0321 09:37:23.086944 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-utilities\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.087037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-catalog-content\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.087609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-catalog-content\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.088209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-utilities\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.111965 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p7p\" (UniqueName: \"kubernetes.io/projected/20dc5df1-ff01-4723-b38c-6c0142fd868b-kube-api-access-72p7p\") pod \"certified-operators-rqx8s\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.233578 4932 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.703840 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:37:23 crc kubenswrapper[4932]: E0321 09:37:23.704388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:37:23 crc kubenswrapper[4932]: I0321 09:37:23.737863 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqx8s"] Mar 21 09:37:24 crc kubenswrapper[4932]: I0321 09:37:24.751194 4932 generic.go:334] "Generic (PLEG): container finished" podID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerID="6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6" exitCode=0 Mar 21 09:37:24 crc kubenswrapper[4932]: I0321 09:37:24.751295 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqx8s" event={"ID":"20dc5df1-ff01-4723-b38c-6c0142fd868b","Type":"ContainerDied","Data":"6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6"} Mar 21 09:37:24 crc kubenswrapper[4932]: I0321 09:37:24.751487 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqx8s" event={"ID":"20dc5df1-ff01-4723-b38c-6c0142fd868b","Type":"ContainerStarted","Data":"4f73d52ccc9c6864196568ff569481ee97a82b1910ba86b4a27c973ef5dee0f6"} Mar 21 09:37:25 crc kubenswrapper[4932]: I0321 09:37:25.702658 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:37:25 crc kubenswrapper[4932]: E0321 
09:37:25.703200 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:37:25 crc kubenswrapper[4932]: I0321 09:37:25.771328 4932 generic.go:334] "Generic (PLEG): container finished" podID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerID="b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6" exitCode=0 Mar 21 09:37:25 crc kubenswrapper[4932]: I0321 09:37:25.771401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqx8s" event={"ID":"20dc5df1-ff01-4723-b38c-6c0142fd868b","Type":"ContainerDied","Data":"b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6"} Mar 21 09:37:26 crc kubenswrapper[4932]: I0321 09:37:26.783243 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqx8s" event={"ID":"20dc5df1-ff01-4723-b38c-6c0142fd868b","Type":"ContainerStarted","Data":"5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46"} Mar 21 09:37:27 crc kubenswrapper[4932]: I0321 09:37:27.710314 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:37:27 crc kubenswrapper[4932]: E0321 09:37:27.711298 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:37:33 crc kubenswrapper[4932]: I0321 09:37:33.234308 
4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:33 crc kubenswrapper[4932]: I0321 09:37:33.234934 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:33 crc kubenswrapper[4932]: I0321 09:37:33.288577 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:33 crc kubenswrapper[4932]: I0321 09:37:33.314759 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqx8s" podStartSLOduration=9.9212543 podStartE2EDuration="11.314719653s" podCreationTimestamp="2026-03-21 09:37:22 +0000 UTC" firstStartedPulling="2026-03-21 09:37:24.754095888 +0000 UTC m=+2348.349294157" lastFinishedPulling="2026-03-21 09:37:26.147561231 +0000 UTC m=+2349.742759510" observedRunningTime="2026-03-21 09:37:26.800626173 +0000 UTC m=+2350.395824442" watchObservedRunningTime="2026-03-21 09:37:33.314719653 +0000 UTC m=+2356.909917942" Mar 21 09:37:33 crc kubenswrapper[4932]: I0321 09:37:33.897710 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:33 crc kubenswrapper[4932]: I0321 09:37:33.940905 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqx8s"] Mar 21 09:37:34 crc kubenswrapper[4932]: I0321 09:37:34.703188 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:37:34 crc kubenswrapper[4932]: E0321 09:37:34.703805 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:37:35 crc kubenswrapper[4932]: I0321 09:37:35.864537 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rqx8s" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="registry-server" containerID="cri-o://5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46" gracePeriod=2 Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.323693 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.464801 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-catalog-content\") pod \"20dc5df1-ff01-4723-b38c-6c0142fd868b\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.465008 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72p7p\" (UniqueName: \"kubernetes.io/projected/20dc5df1-ff01-4723-b38c-6c0142fd868b-kube-api-access-72p7p\") pod \"20dc5df1-ff01-4723-b38c-6c0142fd868b\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.465183 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-utilities\") pod \"20dc5df1-ff01-4723-b38c-6c0142fd868b\" (UID: \"20dc5df1-ff01-4723-b38c-6c0142fd868b\") " Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.466589 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-utilities" (OuterVolumeSpecName: "utilities") pod "20dc5df1-ff01-4723-b38c-6c0142fd868b" (UID: "20dc5df1-ff01-4723-b38c-6c0142fd868b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.476497 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20dc5df1-ff01-4723-b38c-6c0142fd868b-kube-api-access-72p7p" (OuterVolumeSpecName: "kube-api-access-72p7p") pod "20dc5df1-ff01-4723-b38c-6c0142fd868b" (UID: "20dc5df1-ff01-4723-b38c-6c0142fd868b"). InnerVolumeSpecName "kube-api-access-72p7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.568577 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.568622 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72p7p\" (UniqueName: \"kubernetes.io/projected/20dc5df1-ff01-4723-b38c-6c0142fd868b-kube-api-access-72p7p\") on node \"crc\" DevicePath \"\"" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.879446 4932 generic.go:334] "Generic (PLEG): container finished" podID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerID="5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46" exitCode=0 Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.879497 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqx8s" event={"ID":"20dc5df1-ff01-4723-b38c-6c0142fd868b","Type":"ContainerDied","Data":"5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46"} Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.879528 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rqx8s" event={"ID":"20dc5df1-ff01-4723-b38c-6c0142fd868b","Type":"ContainerDied","Data":"4f73d52ccc9c6864196568ff569481ee97a82b1910ba86b4a27c973ef5dee0f6"} Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.879547 4932 scope.go:117] "RemoveContainer" containerID="5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.879710 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqx8s" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.902667 4932 scope.go:117] "RemoveContainer" containerID="b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.928374 4932 scope.go:117] "RemoveContainer" containerID="6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.992044 4932 scope.go:117] "RemoveContainer" containerID="5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46" Mar 21 09:37:36 crc kubenswrapper[4932]: E0321 09:37:36.992730 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46\": container with ID starting with 5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46 not found: ID does not exist" containerID="5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.992772 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46"} err="failed to get container status \"5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46\": rpc error: code = NotFound desc = could not find container 
\"5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46\": container with ID starting with 5ced2a870fc4b21725c4c523509b0f3ecfa9b8dc9a780930812a4ea75d5c8a46 not found: ID does not exist" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.992799 4932 scope.go:117] "RemoveContainer" containerID="b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6" Mar 21 09:37:36 crc kubenswrapper[4932]: E0321 09:37:36.993596 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6\": container with ID starting with b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6 not found: ID does not exist" containerID="b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.993683 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6"} err="failed to get container status \"b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6\": rpc error: code = NotFound desc = could not find container \"b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6\": container with ID starting with b876ea66327fbf6c6d1c8898b715dcf885b85ef1459c6d1da5a714194f9c9fc6 not found: ID does not exist" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.993732 4932 scope.go:117] "RemoveContainer" containerID="6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6" Mar 21 09:37:36 crc kubenswrapper[4932]: E0321 09:37:36.994442 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6\": container with ID starting with 6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6 not found: ID does not exist" 
containerID="6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6" Mar 21 09:37:36 crc kubenswrapper[4932]: I0321 09:37:36.994493 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6"} err="failed to get container status \"6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6\": rpc error: code = NotFound desc = could not find container \"6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6\": container with ID starting with 6ac5c0bce251478f4e605d914c1c940a7a1fefd22fa2d940162a17b640c335a6 not found: ID does not exist" Mar 21 09:37:37 crc kubenswrapper[4932]: I0321 09:37:37.050042 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20dc5df1-ff01-4723-b38c-6c0142fd868b" (UID: "20dc5df1-ff01-4723-b38c-6c0142fd868b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:37:37 crc kubenswrapper[4932]: I0321 09:37:37.082431 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20dc5df1-ff01-4723-b38c-6c0142fd868b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:37:37 crc kubenswrapper[4932]: I0321 09:37:37.221790 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqx8s"] Mar 21 09:37:37 crc kubenswrapper[4932]: I0321 09:37:37.232724 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rqx8s"] Mar 21 09:37:37 crc kubenswrapper[4932]: I0321 09:37:37.755258 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" path="/var/lib/kubelet/pods/20dc5df1-ff01-4723-b38c-6c0142fd868b/volumes" Mar 21 09:37:39 crc kubenswrapper[4932]: I0321 09:37:39.702151 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:37:39 crc kubenswrapper[4932]: I0321 09:37:39.702494 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:37:39 crc kubenswrapper[4932]: E0321 09:37:39.702729 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:37:39 crc kubenswrapper[4932]: E0321 09:37:39.702727 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:37:48 crc kubenswrapper[4932]: I0321 09:37:48.702902 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:37:48 crc kubenswrapper[4932]: E0321 09:37:48.703553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:37:50 crc kubenswrapper[4932]: I0321 09:37:50.703731 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:37:50 crc kubenswrapper[4932]: E0321 09:37:50.704416 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:37:51 crc kubenswrapper[4932]: I0321 09:37:51.703241 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:37:51 crc kubenswrapper[4932]: E0321 09:37:51.703651 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.162081 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568098-4p75n"] Mar 21 09:38:00 crc kubenswrapper[4932]: E0321 09:38:00.163078 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="extract-content" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.163094 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="extract-content" Mar 21 09:38:00 crc kubenswrapper[4932]: E0321 09:38:00.163109 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="extract-utilities" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.163116 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="extract-utilities" Mar 21 09:38:00 crc kubenswrapper[4932]: E0321 09:38:00.163141 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="registry-server" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.163147 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="registry-server" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.163388 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="20dc5df1-ff01-4723-b38c-6c0142fd868b" containerName="registry-server" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.164197 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.170962 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.172139 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.172314 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.174279 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568098-4p75n"] Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.187600 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f2tp\" (UniqueName: \"kubernetes.io/projected/2fa69403-a103-48bd-999c-b62ba27c9356-kube-api-access-9f2tp\") pod \"auto-csr-approver-29568098-4p75n\" (UID: \"2fa69403-a103-48bd-999c-b62ba27c9356\") " pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.290390 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f2tp\" (UniqueName: \"kubernetes.io/projected/2fa69403-a103-48bd-999c-b62ba27c9356-kube-api-access-9f2tp\") pod \"auto-csr-approver-29568098-4p75n\" (UID: \"2fa69403-a103-48bd-999c-b62ba27c9356\") " pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.317312 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f2tp\" (UniqueName: \"kubernetes.io/projected/2fa69403-a103-48bd-999c-b62ba27c9356-kube-api-access-9f2tp\") pod \"auto-csr-approver-29568098-4p75n\" (UID: \"2fa69403-a103-48bd-999c-b62ba27c9356\") " 
pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.490787 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.702873 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:38:00 crc kubenswrapper[4932]: E0321 09:38:00.703667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:38:00 crc kubenswrapper[4932]: I0321 09:38:00.960145 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568098-4p75n"] Mar 21 09:38:01 crc kubenswrapper[4932]: I0321 09:38:01.106158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568098-4p75n" event={"ID":"2fa69403-a103-48bd-999c-b62ba27c9356","Type":"ContainerStarted","Data":"f8393dc8ef6374392299eb399cc2c3d657559152ca1e8b2beb780dc0efff114d"} Mar 21 09:38:02 crc kubenswrapper[4932]: I0321 09:38:02.125931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568098-4p75n" event={"ID":"2fa69403-a103-48bd-999c-b62ba27c9356","Type":"ContainerStarted","Data":"aa92ce4ee31291de1297dbbec3074ea1e49e75c1a66b07f769b07abc92f9aaa7"} Mar 21 09:38:02 crc kubenswrapper[4932]: I0321 09:38:02.150467 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568098-4p75n" podStartSLOduration=1.332157956 podStartE2EDuration="2.15044524s" podCreationTimestamp="2026-03-21 09:38:00 +0000 UTC" 
firstStartedPulling="2026-03-21 09:38:00.967383405 +0000 UTC m=+2384.562581674" lastFinishedPulling="2026-03-21 09:38:01.785670699 +0000 UTC m=+2385.380868958" observedRunningTime="2026-03-21 09:38:02.150188662 +0000 UTC m=+2385.745387021" watchObservedRunningTime="2026-03-21 09:38:02.15044524 +0000 UTC m=+2385.745643509" Mar 21 09:38:03 crc kubenswrapper[4932]: I0321 09:38:03.137301 4932 generic.go:334] "Generic (PLEG): container finished" podID="2fa69403-a103-48bd-999c-b62ba27c9356" containerID="aa92ce4ee31291de1297dbbec3074ea1e49e75c1a66b07f769b07abc92f9aaa7" exitCode=0 Mar 21 09:38:03 crc kubenswrapper[4932]: I0321 09:38:03.137386 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568098-4p75n" event={"ID":"2fa69403-a103-48bd-999c-b62ba27c9356","Type":"ContainerDied","Data":"aa92ce4ee31291de1297dbbec3074ea1e49e75c1a66b07f769b07abc92f9aaa7"} Mar 21 09:38:04 crc kubenswrapper[4932]: I0321 09:38:04.556135 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:04 crc kubenswrapper[4932]: I0321 09:38:04.687720 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f2tp\" (UniqueName: \"kubernetes.io/projected/2fa69403-a103-48bd-999c-b62ba27c9356-kube-api-access-9f2tp\") pod \"2fa69403-a103-48bd-999c-b62ba27c9356\" (UID: \"2fa69403-a103-48bd-999c-b62ba27c9356\") " Mar 21 09:38:04 crc kubenswrapper[4932]: I0321 09:38:04.695886 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa69403-a103-48bd-999c-b62ba27c9356-kube-api-access-9f2tp" (OuterVolumeSpecName: "kube-api-access-9f2tp") pod "2fa69403-a103-48bd-999c-b62ba27c9356" (UID: "2fa69403-a103-48bd-999c-b62ba27c9356"). InnerVolumeSpecName "kube-api-access-9f2tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:38:04 crc kubenswrapper[4932]: I0321 09:38:04.703767 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:38:04 crc kubenswrapper[4932]: E0321 09:38:04.704188 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:38:04 crc kubenswrapper[4932]: I0321 09:38:04.704469 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:38:04 crc kubenswrapper[4932]: E0321 09:38:04.704944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:38:04 crc kubenswrapper[4932]: I0321 09:38:04.792193 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f2tp\" (UniqueName: \"kubernetes.io/projected/2fa69403-a103-48bd-999c-b62ba27c9356-kube-api-access-9f2tp\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:05 crc kubenswrapper[4932]: I0321 09:38:05.158719 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568098-4p75n" event={"ID":"2fa69403-a103-48bd-999c-b62ba27c9356","Type":"ContainerDied","Data":"f8393dc8ef6374392299eb399cc2c3d657559152ca1e8b2beb780dc0efff114d"} Mar 21 09:38:05 crc kubenswrapper[4932]: I0321 09:38:05.158802 4932 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8393dc8ef6374392299eb399cc2c3d657559152ca1e8b2beb780dc0efff114d" Mar 21 09:38:05 crc kubenswrapper[4932]: I0321 09:38:05.158828 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568098-4p75n" Mar 21 09:38:05 crc kubenswrapper[4932]: I0321 09:38:05.222503 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568092-vcsww"] Mar 21 09:38:05 crc kubenswrapper[4932]: I0321 09:38:05.231470 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568092-vcsww"] Mar 21 09:38:05 crc kubenswrapper[4932]: I0321 09:38:05.717878 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed62f384-7497-4500-9102-479493765eb1" path="/var/lib/kubelet/pods/ed62f384-7497-4500-9102-479493765eb1/volumes" Mar 21 09:38:12 crc kubenswrapper[4932]: I0321 09:38:12.702671 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:38:12 crc kubenswrapper[4932]: E0321 09:38:12.704131 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:38:15 crc kubenswrapper[4932]: I0321 09:38:15.703006 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:38:15 crc kubenswrapper[4932]: I0321 09:38:15.703387 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:38:15 crc kubenswrapper[4932]: E0321 09:38:15.703576 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:38:15 crc kubenswrapper[4932]: E0321 09:38:15.703604 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.618282 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v24q8"] Mar 21 09:38:24 crc kubenswrapper[4932]: E0321 09:38:24.619595 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa69403-a103-48bd-999c-b62ba27c9356" containerName="oc" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.619609 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa69403-a103-48bd-999c-b62ba27c9356" containerName="oc" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.619808 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa69403-a103-48bd-999c-b62ba27c9356" containerName="oc" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.621279 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.629174 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v24q8"] Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.781729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-catalog-content\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.781840 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9m8\" (UniqueName: \"kubernetes.io/projected/3f9af4f7-d990-4d84-b786-3a29bf5701f8-kube-api-access-rh9m8\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.781933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-utilities\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.883705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-catalog-content\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.883783 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rh9m8\" (UniqueName: \"kubernetes.io/projected/3f9af4f7-d990-4d84-b786-3a29bf5701f8-kube-api-access-rh9m8\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.883873 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-utilities\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.884223 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-catalog-content\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.884670 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-utilities\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.903275 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh9m8\" (UniqueName: \"kubernetes.io/projected/3f9af4f7-d990-4d84-b786-3a29bf5701f8-kube-api-access-rh9m8\") pod \"redhat-marketplace-v24q8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:24 crc kubenswrapper[4932]: I0321 09:38:24.944286 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:25 crc kubenswrapper[4932]: I0321 09:38:25.467480 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v24q8"] Mar 21 09:38:26 crc kubenswrapper[4932]: I0321 09:38:26.366994 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerID="7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f" exitCode=0 Mar 21 09:38:26 crc kubenswrapper[4932]: I0321 09:38:26.367068 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerDied","Data":"7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f"} Mar 21 09:38:26 crc kubenswrapper[4932]: I0321 09:38:26.367273 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerStarted","Data":"dae2273a4e7f93c9445811f02b4f0ff912f380779398e829a9092a68ee0bd731"} Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.017616 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4ct6"] Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.020407 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.034777 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4ct6"] Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.137092 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lcj\" (UniqueName: \"kubernetes.io/projected/25f13bf5-f2f7-4981-aec4-986f2348fe4d-kube-api-access-58lcj\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.137266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-utilities\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.137295 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-catalog-content\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.239598 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-utilities\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.239651 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-catalog-content\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.239775 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58lcj\" (UniqueName: \"kubernetes.io/projected/25f13bf5-f2f7-4981-aec4-986f2348fe4d-kube-api-access-58lcj\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.240246 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-utilities\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.240267 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-catalog-content\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.266779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58lcj\" (UniqueName: \"kubernetes.io/projected/25f13bf5-f2f7-4981-aec4-986f2348fe4d-kube-api-access-58lcj\") pod \"community-operators-r4ct6\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.349545 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.379771 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerStarted","Data":"3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a"} Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.708680 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:38:27 crc kubenswrapper[4932]: E0321 09:38:27.709210 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:38:27 crc kubenswrapper[4932]: I0321 09:38:27.950952 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4ct6"] Mar 21 09:38:27 crc kubenswrapper[4932]: W0321 09:38:27.953930 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f13bf5_f2f7_4981_aec4_986f2348fe4d.slice/crio-900307ffb891fed28a25b849d63146bb7822b277b95bf9245bd9381ee33d3356 WatchSource:0}: Error finding container 900307ffb891fed28a25b849d63146bb7822b277b95bf9245bd9381ee33d3356: Status 404 returned error can't find the container with id 900307ffb891fed28a25b849d63146bb7822b277b95bf9245bd9381ee33d3356 Mar 21 09:38:28 crc kubenswrapper[4932]: I0321 09:38:28.393486 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerID="3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a" exitCode=0 Mar 21 09:38:28 crc kubenswrapper[4932]: 
I0321 09:38:28.393655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerDied","Data":"3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a"} Mar 21 09:38:28 crc kubenswrapper[4932]: I0321 09:38:28.399029 4932 generic.go:334] "Generic (PLEG): container finished" podID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerID="7c4142dfc0e4435dc82c7889c48eadfacd6b758f1b5c10bb1c87eab665af04a2" exitCode=0 Mar 21 09:38:28 crc kubenswrapper[4932]: I0321 09:38:28.399074 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4ct6" event={"ID":"25f13bf5-f2f7-4981-aec4-986f2348fe4d","Type":"ContainerDied","Data":"7c4142dfc0e4435dc82c7889c48eadfacd6b758f1b5c10bb1c87eab665af04a2"} Mar 21 09:38:28 crc kubenswrapper[4932]: I0321 09:38:28.399103 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4ct6" event={"ID":"25f13bf5-f2f7-4981-aec4-986f2348fe4d","Type":"ContainerStarted","Data":"900307ffb891fed28a25b849d63146bb7822b277b95bf9245bd9381ee33d3356"} Mar 21 09:38:29 crc kubenswrapper[4932]: I0321 09:38:29.414532 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerStarted","Data":"d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3"} Mar 21 09:38:29 crc kubenswrapper[4932]: I0321 09:38:29.445093 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v24q8" podStartSLOduration=3.002206283 podStartE2EDuration="5.445069931s" podCreationTimestamp="2026-03-21 09:38:24 +0000 UTC" firstStartedPulling="2026-03-21 09:38:26.368918456 +0000 UTC m=+2409.964116725" lastFinishedPulling="2026-03-21 09:38:28.811782094 +0000 UTC m=+2412.406980373" 
observedRunningTime="2026-03-21 09:38:29.437661571 +0000 UTC m=+2413.032859840" watchObservedRunningTime="2026-03-21 09:38:29.445069931 +0000 UTC m=+2413.040268220" Mar 21 09:38:29 crc kubenswrapper[4932]: I0321 09:38:29.703154 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:38:29 crc kubenswrapper[4932]: E0321 09:38:29.703766 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:38:30 crc kubenswrapper[4932]: I0321 09:38:30.428708 4932 generic.go:334] "Generic (PLEG): container finished" podID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerID="072a23ced0bf6792d58657b76263fa8b06d7abff0e398f934096b647f45f0879" exitCode=0 Mar 21 09:38:30 crc kubenswrapper[4932]: I0321 09:38:30.428771 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4ct6" event={"ID":"25f13bf5-f2f7-4981-aec4-986f2348fe4d","Type":"ContainerDied","Data":"072a23ced0bf6792d58657b76263fa8b06d7abff0e398f934096b647f45f0879"} Mar 21 09:38:30 crc kubenswrapper[4932]: I0321 09:38:30.703009 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:38:30 crc kubenswrapper[4932]: E0321 09:38:30.703331 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" 
podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:38:31 crc kubenswrapper[4932]: I0321 09:38:31.447068 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4ct6" event={"ID":"25f13bf5-f2f7-4981-aec4-986f2348fe4d","Type":"ContainerStarted","Data":"2c3bb4208342c54caf36554c23884553792dfd78bbc2ebe78308bc32cf9420f0"} Mar 21 09:38:31 crc kubenswrapper[4932]: I0321 09:38:31.468180 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4ct6" podStartSLOduration=3.033569735 podStartE2EDuration="5.468158236s" podCreationTimestamp="2026-03-21 09:38:26 +0000 UTC" firstStartedPulling="2026-03-21 09:38:28.402320812 +0000 UTC m=+2411.997519081" lastFinishedPulling="2026-03-21 09:38:30.836909313 +0000 UTC m=+2414.432107582" observedRunningTime="2026-03-21 09:38:31.463918735 +0000 UTC m=+2415.059117004" watchObservedRunningTime="2026-03-21 09:38:31.468158236 +0000 UTC m=+2415.063356505" Mar 21 09:38:31 crc kubenswrapper[4932]: I0321 09:38:31.787936 4932 scope.go:117] "RemoveContainer" containerID="ad6961882ca7a1d45d65b1d11f94d75cd4e771e3f15a9ae27ae0d44fa68ba64e" Mar 21 09:38:34 crc kubenswrapper[4932]: I0321 09:38:34.945415 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:34 crc kubenswrapper[4932]: I0321 09:38:34.945748 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:34 crc kubenswrapper[4932]: I0321 09:38:34.992834 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:35 crc kubenswrapper[4932]: I0321 09:38:35.528168 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:36 crc kubenswrapper[4932]: I0321 
09:38:36.608511 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v24q8"] Mar 21 09:38:37 crc kubenswrapper[4932]: I0321 09:38:37.351311 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:37 crc kubenswrapper[4932]: I0321 09:38:37.351452 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:37 crc kubenswrapper[4932]: I0321 09:38:37.397312 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:37 crc kubenswrapper[4932]: I0321 09:38:37.495688 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v24q8" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="registry-server" containerID="cri-o://d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3" gracePeriod=2 Mar 21 09:38:37 crc kubenswrapper[4932]: I0321 09:38:37.551295 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.433340 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.508516 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerID="d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3" exitCode=0 Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.508582 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v24q8" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.508600 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerDied","Data":"d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3"} Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.508975 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v24q8" event={"ID":"3f9af4f7-d990-4d84-b786-3a29bf5701f8","Type":"ContainerDied","Data":"dae2273a4e7f93c9445811f02b4f0ff912f380779398e829a9092a68ee0bd731"} Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.509002 4932 scope.go:117] "RemoveContainer" containerID="d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.527377 4932 scope.go:117] "RemoveContainer" containerID="3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.549205 4932 scope.go:117] "RemoveContainer" containerID="7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.582334 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-utilities\") pod \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.583225 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-utilities" (OuterVolumeSpecName: "utilities") pod "3f9af4f7-d990-4d84-b786-3a29bf5701f8" (UID: "3f9af4f7-d990-4d84-b786-3a29bf5701f8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.583542 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-catalog-content\") pod \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.583574 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh9m8\" (UniqueName: \"kubernetes.io/projected/3f9af4f7-d990-4d84-b786-3a29bf5701f8-kube-api-access-rh9m8\") pod \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\" (UID: \"3f9af4f7-d990-4d84-b786-3a29bf5701f8\") " Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.585540 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.589973 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9af4f7-d990-4d84-b786-3a29bf5701f8-kube-api-access-rh9m8" (OuterVolumeSpecName: "kube-api-access-rh9m8") pod "3f9af4f7-d990-4d84-b786-3a29bf5701f8" (UID: "3f9af4f7-d990-4d84-b786-3a29bf5701f8"). InnerVolumeSpecName "kube-api-access-rh9m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.599608 4932 scope.go:117] "RemoveContainer" containerID="d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3" Mar 21 09:38:38 crc kubenswrapper[4932]: E0321 09:38:38.600145 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3\": container with ID starting with d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3 not found: ID does not exist" containerID="d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.600198 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3"} err="failed to get container status \"d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3\": rpc error: code = NotFound desc = could not find container \"d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3\": container with ID starting with d4b647b9aea9a49595b6f9ca231143457865f71c95e4a1868fc1111cf6f64da3 not found: ID does not exist" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.600231 4932 scope.go:117] "RemoveContainer" containerID="3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a" Mar 21 09:38:38 crc kubenswrapper[4932]: E0321 09:38:38.600786 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a\": container with ID starting with 3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a not found: ID does not exist" containerID="3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.600831 
4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a"} err="failed to get container status \"3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a\": rpc error: code = NotFound desc = could not find container \"3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a\": container with ID starting with 3ee875e035fa055850551347d816da3223072577105e62911ea4a25ff155681a not found: ID does not exist" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.600929 4932 scope.go:117] "RemoveContainer" containerID="7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f" Mar 21 09:38:38 crc kubenswrapper[4932]: E0321 09:38:38.601309 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f\": container with ID starting with 7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f not found: ID does not exist" containerID="7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.601346 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f"} err="failed to get container status \"7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f\": rpc error: code = NotFound desc = could not find container \"7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f\": container with ID starting with 7a40cc042195eb742a22607ea76e8fefda3cdd887d9de5b9eddd18aa3685d53f not found: ID does not exist" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.612858 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "3f9af4f7-d990-4d84-b786-3a29bf5701f8" (UID: "3f9af4f7-d990-4d84-b786-3a29bf5701f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.687775 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9af4f7-d990-4d84-b786-3a29bf5701f8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.688036 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh9m8\" (UniqueName: \"kubernetes.io/projected/3f9af4f7-d990-4d84-b786-3a29bf5701f8-kube-api-access-rh9m8\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.842030 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v24q8"] Mar 21 09:38:38 crc kubenswrapper[4932]: I0321 09:38:38.849970 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v24q8"] Mar 21 09:38:39 crc kubenswrapper[4932]: I0321 09:38:39.703281 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:38:39 crc kubenswrapper[4932]: E0321 09:38:39.703544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:38:39 crc kubenswrapper[4932]: I0321 09:38:39.713971 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" path="/var/lib/kubelet/pods/3f9af4f7-d990-4d84-b786-3a29bf5701f8/volumes" Mar 21 09:38:42 crc 
kubenswrapper[4932]: I0321 09:38:42.208337 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4ct6"] Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.209231 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4ct6" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="registry-server" containerID="cri-o://2c3bb4208342c54caf36554c23884553792dfd78bbc2ebe78308bc32cf9420f0" gracePeriod=2 Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.552620 4932 generic.go:334] "Generic (PLEG): container finished" podID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerID="2c3bb4208342c54caf36554c23884553792dfd78bbc2ebe78308bc32cf9420f0" exitCode=0 Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.552726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4ct6" event={"ID":"25f13bf5-f2f7-4981-aec4-986f2348fe4d","Type":"ContainerDied","Data":"2c3bb4208342c54caf36554c23884553792dfd78bbc2ebe78308bc32cf9420f0"} Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.648102 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.770846 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-catalog-content\") pod \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.771007 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-utilities\") pod \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.771068 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58lcj\" (UniqueName: \"kubernetes.io/projected/25f13bf5-f2f7-4981-aec4-986f2348fe4d-kube-api-access-58lcj\") pod \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\" (UID: \"25f13bf5-f2f7-4981-aec4-986f2348fe4d\") " Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.772094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-utilities" (OuterVolumeSpecName: "utilities") pod "25f13bf5-f2f7-4981-aec4-986f2348fe4d" (UID: "25f13bf5-f2f7-4981-aec4-986f2348fe4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.778232 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f13bf5-f2f7-4981-aec4-986f2348fe4d-kube-api-access-58lcj" (OuterVolumeSpecName: "kube-api-access-58lcj") pod "25f13bf5-f2f7-4981-aec4-986f2348fe4d" (UID: "25f13bf5-f2f7-4981-aec4-986f2348fe4d"). InnerVolumeSpecName "kube-api-access-58lcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.818890 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25f13bf5-f2f7-4981-aec4-986f2348fe4d" (UID: "25f13bf5-f2f7-4981-aec4-986f2348fe4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.874330 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.874638 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f13bf5-f2f7-4981-aec4-986f2348fe4d-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:42 crc kubenswrapper[4932]: I0321 09:38:42.874737 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58lcj\" (UniqueName: \"kubernetes.io/projected/25f13bf5-f2f7-4981-aec4-986f2348fe4d-kube-api-access-58lcj\") on node \"crc\" DevicePath \"\"" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.564567 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4ct6" event={"ID":"25f13bf5-f2f7-4981-aec4-986f2348fe4d","Type":"ContainerDied","Data":"900307ffb891fed28a25b849d63146bb7822b277b95bf9245bd9381ee33d3356"} Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.564621 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4ct6" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.564932 4932 scope.go:117] "RemoveContainer" containerID="2c3bb4208342c54caf36554c23884553792dfd78bbc2ebe78308bc32cf9420f0" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.590422 4932 scope.go:117] "RemoveContainer" containerID="072a23ced0bf6792d58657b76263fa8b06d7abff0e398f934096b647f45f0879" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.600636 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4ct6"] Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.609625 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r4ct6"] Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.621490 4932 scope.go:117] "RemoveContainer" containerID="7c4142dfc0e4435dc82c7889c48eadfacd6b758f1b5c10bb1c87eab665af04a2" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.703121 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:38:43 crc kubenswrapper[4932]: E0321 09:38:43.703311 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.703458 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:38:43 crc kubenswrapper[4932]: E0321 09:38:43.703740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:38:43 crc kubenswrapper[4932]: I0321 09:38:43.720380 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" path="/var/lib/kubelet/pods/25f13bf5-f2f7-4981-aec4-986f2348fe4d/volumes" Mar 21 09:38:53 crc kubenswrapper[4932]: I0321 09:38:53.703433 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:38:53 crc kubenswrapper[4932]: E0321 09:38:53.704178 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:38:54 crc kubenswrapper[4932]: I0321 09:38:54.703598 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:38:54 crc kubenswrapper[4932]: E0321 09:38:54.704418 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:38:57 crc kubenswrapper[4932]: I0321 09:38:57.709895 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:38:57 crc kubenswrapper[4932]: E0321 09:38:57.710490 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:39:05 crc kubenswrapper[4932]: I0321 09:39:05.702763 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:39:05 crc kubenswrapper[4932]: I0321 09:39:05.703292 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:39:05 crc kubenswrapper[4932]: E0321 09:39:05.703507 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:39:05 crc kubenswrapper[4932]: E0321 09:39:05.703535 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:39:12 crc kubenswrapper[4932]: I0321 09:39:12.702303 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:39:12 crc kubenswrapper[4932]: E0321 09:39:12.704502 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:39:19 crc kubenswrapper[4932]: I0321 09:39:19.702801 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:39:19 crc kubenswrapper[4932]: E0321 09:39:19.704079 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:39:20 crc kubenswrapper[4932]: I0321 09:39:20.702113 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:39:20 crc kubenswrapper[4932]: E0321 09:39:20.702799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:39:23 crc kubenswrapper[4932]: I0321 09:39:23.703199 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:39:23 crc kubenswrapper[4932]: E0321 09:39:23.703837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:39:30 crc kubenswrapper[4932]: I0321 09:39:30.703009 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:39:30 crc kubenswrapper[4932]: E0321 09:39:30.703763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:39:32 crc kubenswrapper[4932]: I0321 09:39:32.702464 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:39:32 crc kubenswrapper[4932]: E0321 09:39:32.703389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:39:35 crc kubenswrapper[4932]: I0321 09:39:35.702392 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:39:35 crc kubenswrapper[4932]: E0321 09:39:35.702966 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:39:41 crc 
kubenswrapper[4932]: I0321 09:39:41.703728 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:39:41 crc kubenswrapper[4932]: E0321 09:39:41.704646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:39:45 crc kubenswrapper[4932]: I0321 09:39:45.702866 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:39:45 crc kubenswrapper[4932]: E0321 09:39:45.703431 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:39:49 crc kubenswrapper[4932]: I0321 09:39:49.703036 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:39:49 crc kubenswrapper[4932]: E0321 09:39:49.703988 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:39:53 crc kubenswrapper[4932]: I0321 09:39:53.702913 4932 scope.go:117] "RemoveContainer" 
containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:39:53 crc kubenswrapper[4932]: E0321 09:39:53.703644 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:39:57 crc kubenswrapper[4932]: I0321 09:39:57.709332 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:39:57 crc kubenswrapper[4932]: E0321 09:39:57.709936 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.148855 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568100-d98vn"] Mar 21 09:40:00 crc kubenswrapper[4932]: E0321 09:40:00.149576 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="extract-content" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149591 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="extract-content" Mar 21 09:40:00 crc kubenswrapper[4932]: E0321 09:40:00.149604 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="registry-server" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149610 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="registry-server" Mar 21 09:40:00 crc kubenswrapper[4932]: E0321 09:40:00.149623 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="extract-utilities" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149631 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="extract-utilities" Mar 21 09:40:00 crc kubenswrapper[4932]: E0321 09:40:00.149656 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="extract-content" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149665 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="extract-content" Mar 21 09:40:00 crc kubenswrapper[4932]: E0321 09:40:00.149677 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="registry-server" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149685 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="registry-server" Mar 21 09:40:00 crc kubenswrapper[4932]: E0321 09:40:00.149704 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="extract-utilities" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149712 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="extract-utilities" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149930 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f13bf5-f2f7-4981-aec4-986f2348fe4d" containerName="registry-server" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.149948 4932 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3f9af4f7-d990-4d84-b786-3a29bf5701f8" containerName="registry-server" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.150795 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.156011 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.156029 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.157964 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.165990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568100-d98vn"] Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.270815 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l8k4\" (UniqueName: \"kubernetes.io/projected/000d6da3-2273-4977-9052-c5e9cdbdf740-kube-api-access-4l8k4\") pod \"auto-csr-approver-29568100-d98vn\" (UID: \"000d6da3-2273-4977-9052-c5e9cdbdf740\") " pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.373241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l8k4\" (UniqueName: \"kubernetes.io/projected/000d6da3-2273-4977-9052-c5e9cdbdf740-kube-api-access-4l8k4\") pod \"auto-csr-approver-29568100-d98vn\" (UID: \"000d6da3-2273-4977-9052-c5e9cdbdf740\") " pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.397935 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l8k4\" 
(UniqueName: \"kubernetes.io/projected/000d6da3-2273-4977-9052-c5e9cdbdf740-kube-api-access-4l8k4\") pod \"auto-csr-approver-29568100-d98vn\" (UID: \"000d6da3-2273-4977-9052-c5e9cdbdf740\") " pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.472085 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:00 crc kubenswrapper[4932]: I0321 09:40:00.974118 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568100-d98vn"] Mar 21 09:40:01 crc kubenswrapper[4932]: I0321 09:40:01.275751 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568100-d98vn" event={"ID":"000d6da3-2273-4977-9052-c5e9cdbdf740","Type":"ContainerStarted","Data":"80e6f9a68948a1a1345c63739032d62d57375c5988fa9d27ac93fcca5f1ddb45"} Mar 21 09:40:03 crc kubenswrapper[4932]: I0321 09:40:03.295052 4932 generic.go:334] "Generic (PLEG): container finished" podID="000d6da3-2273-4977-9052-c5e9cdbdf740" containerID="19657af7ab466fe8c10169c1f2a013fa45ecd52529fd1833addea2147d2be227" exitCode=0 Mar 21 09:40:03 crc kubenswrapper[4932]: I0321 09:40:03.295166 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568100-d98vn" event={"ID":"000d6da3-2273-4977-9052-c5e9cdbdf740","Type":"ContainerDied","Data":"19657af7ab466fe8c10169c1f2a013fa45ecd52529fd1833addea2147d2be227"} Mar 21 09:40:03 crc kubenswrapper[4932]: I0321 09:40:03.703554 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:40:03 crc kubenswrapper[4932]: E0321 09:40:03.703819 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:40:04 crc kubenswrapper[4932]: I0321 09:40:04.646821 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:04 crc kubenswrapper[4932]: I0321 09:40:04.770984 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l8k4\" (UniqueName: \"kubernetes.io/projected/000d6da3-2273-4977-9052-c5e9cdbdf740-kube-api-access-4l8k4\") pod \"000d6da3-2273-4977-9052-c5e9cdbdf740\" (UID: \"000d6da3-2273-4977-9052-c5e9cdbdf740\") " Mar 21 09:40:04 crc kubenswrapper[4932]: I0321 09:40:04.779687 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000d6da3-2273-4977-9052-c5e9cdbdf740-kube-api-access-4l8k4" (OuterVolumeSpecName: "kube-api-access-4l8k4") pod "000d6da3-2273-4977-9052-c5e9cdbdf740" (UID: "000d6da3-2273-4977-9052-c5e9cdbdf740"). InnerVolumeSpecName "kube-api-access-4l8k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:40:04 crc kubenswrapper[4932]: I0321 09:40:04.874914 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l8k4\" (UniqueName: \"kubernetes.io/projected/000d6da3-2273-4977-9052-c5e9cdbdf740-kube-api-access-4l8k4\") on node \"crc\" DevicePath \"\"" Mar 21 09:40:05 crc kubenswrapper[4932]: I0321 09:40:05.311837 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568100-d98vn" event={"ID":"000d6da3-2273-4977-9052-c5e9cdbdf740","Type":"ContainerDied","Data":"80e6f9a68948a1a1345c63739032d62d57375c5988fa9d27ac93fcca5f1ddb45"} Mar 21 09:40:05 crc kubenswrapper[4932]: I0321 09:40:05.311893 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e6f9a68948a1a1345c63739032d62d57375c5988fa9d27ac93fcca5f1ddb45" Mar 21 09:40:05 crc kubenswrapper[4932]: I0321 09:40:05.311912 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568100-d98vn" Mar 21 09:40:05 crc kubenswrapper[4932]: I0321 09:40:05.728262 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568094-mgmvn"] Mar 21 09:40:05 crc kubenswrapper[4932]: I0321 09:40:05.739756 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568094-mgmvn"] Mar 21 09:40:06 crc kubenswrapper[4932]: I0321 09:40:06.703505 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:40:06 crc kubenswrapper[4932]: E0321 09:40:06.704406 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:40:07 crc kubenswrapper[4932]: I0321 09:40:07.716213 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1" path="/var/lib/kubelet/pods/9f8cffc8-c35c-4cc8-9e67-15f7981d8fd1/volumes" Mar 21 09:40:09 crc kubenswrapper[4932]: I0321 09:40:09.703136 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be" Mar 21 09:40:09 crc kubenswrapper[4932]: E0321 09:40:09.703594 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:40:18 crc kubenswrapper[4932]: I0321 09:40:18.703434 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:40:18 crc kubenswrapper[4932]: E0321 09:40:18.704137 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:40:20 crc kubenswrapper[4932]: I0321 09:40:20.704969 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e" Mar 21 09:40:20 crc kubenswrapper[4932]: E0321 09:40:20.705653 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:40:21 crc kubenswrapper[4932]: I0321 09:40:21.703226 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:40:21 crc kubenswrapper[4932]: E0321 09:40:21.703594 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:40:31 crc kubenswrapper[4932]: I0321 09:40:31.702487 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:40:31 crc kubenswrapper[4932]: E0321 09:40:31.703839 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:40:31 crc kubenswrapper[4932]: I0321 09:40:31.912633 4932 scope.go:117] "RemoveContainer" containerID="5ad9f0ba5cca57fa90cd1ddd56e0eef06ea2799488a30b4a42235caa5412f6cd"
Mar 21 09:40:32 crc kubenswrapper[4932]: I0321 09:40:32.702942 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:40:32 crc kubenswrapper[4932]: I0321 09:40:32.703359 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:40:32 crc kubenswrapper[4932]: E0321 09:40:32.703716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:40:32 crc kubenswrapper[4932]: E0321 09:40:32.703856 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:40:43 crc kubenswrapper[4932]: I0321 09:40:43.702938 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:40:43 crc kubenswrapper[4932]: E0321 09:40:43.703851 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:40:44 crc kubenswrapper[4932]: I0321 09:40:44.703462 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:40:44 crc kubenswrapper[4932]: E0321 09:40:44.703732 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:40:47 crc kubenswrapper[4932]: I0321 09:40:47.708908 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:40:47 crc kubenswrapper[4932]: E0321 09:40:47.709729 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:40:56 crc kubenswrapper[4932]: I0321 09:40:56.702817 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:40:56 crc kubenswrapper[4932]: E0321 09:40:56.703750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:40:57 crc kubenswrapper[4932]: I0321 09:40:57.710712 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:40:57 crc kubenswrapper[4932]: E0321 09:40:57.711190 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:40:59 crc kubenswrapper[4932]: I0321 09:40:59.703294 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:40:59 crc kubenswrapper[4932]: E0321 09:40:59.703879 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:41:09 crc kubenswrapper[4932]: I0321 09:41:09.702423 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:41:09 crc kubenswrapper[4932]: E0321 09:41:09.703790 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:41:10 crc kubenswrapper[4932]: I0321 09:41:10.702814 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:41:10 crc kubenswrapper[4932]: E0321 09:41:10.703085 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:41:11 crc kubenswrapper[4932]: I0321 09:41:11.702971 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:41:11 crc kubenswrapper[4932]: E0321 09:41:11.703522 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:41:21 crc kubenswrapper[4932]: I0321 09:41:21.702791 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:41:21 crc kubenswrapper[4932]: E0321 09:41:21.703475 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:41:23 crc kubenswrapper[4932]: I0321 09:41:23.702700 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:41:23 crc kubenswrapper[4932]: E0321 09:41:23.703242 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:41:25 crc kubenswrapper[4932]: I0321 09:41:25.702669 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:41:25 crc kubenswrapper[4932]: E0321 09:41:25.703126 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:41:33 crc kubenswrapper[4932]: I0321 09:41:33.703729 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:41:33 crc kubenswrapper[4932]: E0321 09:41:33.704961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:41:35 crc kubenswrapper[4932]: I0321 09:41:35.703726 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:41:35 crc kubenswrapper[4932]: E0321 09:41:35.704710 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:41:39 crc kubenswrapper[4932]: I0321 09:41:39.704123 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:41:39 crc kubenswrapper[4932]: E0321 09:41:39.705248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:41:46 crc kubenswrapper[4932]: I0321 09:41:46.702764 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:41:46 crc kubenswrapper[4932]: E0321 09:41:46.703471 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:41:47 crc kubenswrapper[4932]: I0321 09:41:47.709463 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:41:47 crc kubenswrapper[4932]: E0321 09:41:47.710060 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 09:41:51 crc kubenswrapper[4932]: I0321 09:41:51.704400 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:41:52 crc kubenswrapper[4932]: I0321 09:41:52.301483 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"}
Mar 21 09:41:57 crc kubenswrapper[4932]: I0321 09:41:57.721218 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:41:57 crc kubenswrapper[4932]: I0321 09:41:57.740293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:41:57 crc kubenswrapper[4932]: I0321 09:41:57.740341 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:41:58 crc kubenswrapper[4932]: I0321 09:41:58.359330 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"}
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.147687 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568102-gbcxq"]
Mar 21 09:42:00 crc kubenswrapper[4932]: E0321 09:42:00.148760 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000d6da3-2273-4977-9052-c5e9cdbdf740" containerName="oc"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.148779 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="000d6da3-2273-4977-9052-c5e9cdbdf740" containerName="oc"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.149005 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="000d6da3-2273-4977-9052-c5e9cdbdf740" containerName="oc"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.149906 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.152398 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.152621 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.152658 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.159309 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568102-gbcxq"]
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.214822 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbw7s\" (UniqueName: \"kubernetes.io/projected/2108c3f0-7756-4dec-bb8d-065b25538683-kube-api-access-fbw7s\") pod \"auto-csr-approver-29568102-gbcxq\" (UID: \"2108c3f0-7756-4dec-bb8d-065b25538683\") " pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.317795 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbw7s\" (UniqueName: \"kubernetes.io/projected/2108c3f0-7756-4dec-bb8d-065b25538683-kube-api-access-fbw7s\") pod \"auto-csr-approver-29568102-gbcxq\" (UID: \"2108c3f0-7756-4dec-bb8d-065b25538683\") " pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.337420 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbw7s\" (UniqueName: \"kubernetes.io/projected/2108c3f0-7756-4dec-bb8d-065b25538683-kube-api-access-fbw7s\") pod \"auto-csr-approver-29568102-gbcxq\" (UID: \"2108c3f0-7756-4dec-bb8d-065b25538683\") " pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.377967 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" exitCode=1
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.378011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"}
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.378059 4932 scope.go:117] "RemoveContainer" containerID="d07afad9ce19badd2562ca663ae19699da36398951d42987cc1215815f5b855e"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.378988 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:42:00 crc kubenswrapper[4932]: E0321 09:42:00.379415 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:42:00 crc kubenswrapper[4932]: I0321 09:42:00.469802 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:01 crc kubenswrapper[4932]: W0321 09:42:01.033221 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2108c3f0_7756_4dec_bb8d_065b25538683.slice/crio-4b7f65ddc8bea0129036095d329e18d76d72875fde98cfcdbe0b50319d8c0d61 WatchSource:0}: Error finding container 4b7f65ddc8bea0129036095d329e18d76d72875fde98cfcdbe0b50319d8c0d61: Status 404 returned error can't find the container with id 4b7f65ddc8bea0129036095d329e18d76d72875fde98cfcdbe0b50319d8c0d61
Mar 21 09:42:01 crc kubenswrapper[4932]: I0321 09:42:01.033452 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568102-gbcxq"]
Mar 21 09:42:01 crc kubenswrapper[4932]: I0321 09:42:01.036303 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 21 09:42:01 crc kubenswrapper[4932]: I0321 09:42:01.387023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568102-gbcxq" event={"ID":"2108c3f0-7756-4dec-bb8d-065b25538683","Type":"ContainerStarted","Data":"4b7f65ddc8bea0129036095d329e18d76d72875fde98cfcdbe0b50319d8c0d61"}
Mar 21 09:42:02 crc kubenswrapper[4932]: I0321 09:42:02.702704 4932 scope.go:117] "RemoveContainer" containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb"
Mar 21 09:42:03 crc kubenswrapper[4932]: I0321 09:42:03.414543 4932 generic.go:334] "Generic (PLEG): container finished" podID="2108c3f0-7756-4dec-bb8d-065b25538683" containerID="739baa3e2e114c0aba6486c3f239b0279a1729cda930fe7c49e3e10630d35668" exitCode=0
Mar 21 09:42:03 crc kubenswrapper[4932]: I0321 09:42:03.414972 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568102-gbcxq" event={"ID":"2108c3f0-7756-4dec-bb8d-065b25538683","Type":"ContainerDied","Data":"739baa3e2e114c0aba6486c3f239b0279a1729cda930fe7c49e3e10630d35668"}
Mar 21 09:42:03 crc kubenswrapper[4932]: I0321 09:42:03.420998 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"fb920de1a494d2a671ce4ad301f4c34f2752b4a5b457b8ced984003f2c7dee4e"}
Mar 21 09:42:04 crc kubenswrapper[4932]: I0321 09:42:04.784209 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:04 crc kubenswrapper[4932]: I0321 09:42:04.920886 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbw7s\" (UniqueName: \"kubernetes.io/projected/2108c3f0-7756-4dec-bb8d-065b25538683-kube-api-access-fbw7s\") pod \"2108c3f0-7756-4dec-bb8d-065b25538683\" (UID: \"2108c3f0-7756-4dec-bb8d-065b25538683\") "
Mar 21 09:42:04 crc kubenswrapper[4932]: I0321 09:42:04.933115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2108c3f0-7756-4dec-bb8d-065b25538683-kube-api-access-fbw7s" (OuterVolumeSpecName: "kube-api-access-fbw7s") pod "2108c3f0-7756-4dec-bb8d-065b25538683" (UID: "2108c3f0-7756-4dec-bb8d-065b25538683"). InnerVolumeSpecName "kube-api-access-fbw7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 09:42:05 crc kubenswrapper[4932]: I0321 09:42:05.023437 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbw7s\" (UniqueName: \"kubernetes.io/projected/2108c3f0-7756-4dec-bb8d-065b25538683-kube-api-access-fbw7s\") on node \"crc\" DevicePath \"\""
Mar 21 09:42:05 crc kubenswrapper[4932]: I0321 09:42:05.442881 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568102-gbcxq" event={"ID":"2108c3f0-7756-4dec-bb8d-065b25538683","Type":"ContainerDied","Data":"4b7f65ddc8bea0129036095d329e18d76d72875fde98cfcdbe0b50319d8c0d61"}
Mar 21 09:42:05 crc kubenswrapper[4932]: I0321 09:42:05.443113 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7f65ddc8bea0129036095d329e18d76d72875fde98cfcdbe0b50319d8c0d61"
Mar 21 09:42:05 crc kubenswrapper[4932]: I0321 09:42:05.442939 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568102-gbcxq"
Mar 21 09:42:05 crc kubenswrapper[4932]: I0321 09:42:05.860391 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568096-fkvcg"]
Mar 21 09:42:05 crc kubenswrapper[4932]: I0321 09:42:05.869070 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568096-fkvcg"]
Mar 21 09:42:06 crc kubenswrapper[4932]: I0321 09:42:06.453475 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" exitCode=1
Mar 21 09:42:06 crc kubenswrapper[4932]: I0321 09:42:06.453525 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"}
Mar 21 09:42:06 crc kubenswrapper[4932]: I0321 09:42:06.453617 4932 scope.go:117] "RemoveContainer" containerID="afa570b29f89f0ec8c5c7318083c325a2ad47205ca262273ebcbc4d2fc5c29be"
Mar 21 09:42:06 crc kubenswrapper[4932]: I0321 09:42:06.454638 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:42:06 crc kubenswrapper[4932]: E0321 09:42:06.455133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.715331 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c301475-0a7e-4509-991e-acbd4f47d23c" path="/var/lib/kubelet/pods/0c301475-0a7e-4509-991e-acbd4f47d23c/volumes"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.740648 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.740780 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.741618 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:42:07 crc kubenswrapper[4932]: E0321 09:42:07.741940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.948271 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.948334 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.948373 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.948389 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 09:42:07 crc kubenswrapper[4932]: I0321 09:42:07.949253 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:42:07 crc kubenswrapper[4932]: E0321 09:42:07.949488 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:42:08 crc kubenswrapper[4932]: I0321 09:42:08.474114 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:42:08 crc kubenswrapper[4932]: E0321 09:42:08.474658 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:42:21 crc kubenswrapper[4932]: I0321 09:42:21.703451 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:42:21 crc kubenswrapper[4932]: E0321 09:42:21.704234 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:42:22 crc kubenswrapper[4932]: I0321 09:42:22.703229 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:42:22 crc kubenswrapper[4932]: E0321 09:42:22.703870 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:42:32 crc kubenswrapper[4932]: I0321 09:42:32.027520 4932 scope.go:117] "RemoveContainer" containerID="2d7da2ea6816dad0fa09b62821776e5a11f6fcc3f2c4b6f324a4e806358c1bc6"
Mar 21 09:42:32 crc kubenswrapper[4932]: I0321 09:42:32.703366 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:42:32 crc kubenswrapper[4932]: E0321 09:42:32.704216 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:42:33 crc kubenswrapper[4932]: I0321 09:42:33.702566 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:42:33 crc kubenswrapper[4932]: E0321 09:42:33.703059 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:42:43 crc kubenswrapper[4932]: I0321 09:42:43.702896 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:42:43 crc kubenswrapper[4932]: E0321 09:42:43.703873 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:42:45 crc kubenswrapper[4932]: I0321 09:42:45.702576 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:42:45 crc kubenswrapper[4932]: E0321 09:42:45.703141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:42:56 crc kubenswrapper[4932]: I0321 09:42:56.702433 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:42:56 crc kubenswrapper[4932]: E0321 09:42:56.703451 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:43:00 crc kubenswrapper[4932]: I0321 09:43:00.703017 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:43:00 crc kubenswrapper[4932]: E0321 09:43:00.703730 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:43:07 crc kubenswrapper[4932]: I0321 09:43:07.708175 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:43:07 crc kubenswrapper[4932]: E0321 09:43:07.708950 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:43:14 crc kubenswrapper[4932]: I0321 09:43:14.703095 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:43:14 crc kubenswrapper[4932]: E0321 09:43:14.703925 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:43:22 crc kubenswrapper[4932]: I0321 09:43:22.703298 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:43:22 crc kubenswrapper[4932]: E0321 09:43:22.704649 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:43:27 crc kubenswrapper[4932]: I0321 09:43:27.708167 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:43:27 crc kubenswrapper[4932]: E0321 09:43:27.708896 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:43:34 crc kubenswrapper[4932]: I0321 09:43:34.702236 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:43:34 crc kubenswrapper[4932]: E0321 09:43:34.703040 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:43:41 crc kubenswrapper[4932]: I0321 09:43:41.703286 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:43:41 crc kubenswrapper[4932]: E0321 09:43:41.703921 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:43:48 crc kubenswrapper[4932]: I0321 09:43:48.702839 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:43:48 crc kubenswrapper[4932]: E0321 09:43:48.703478 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:43:53 crc kubenswrapper[4932]: I0321 09:43:53.702985 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034"
Mar 21 09:43:53 crc kubenswrapper[4932]: E0321 09:43:53.704288 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 09:43:59 crc kubenswrapper[4932]: I0321 09:43:59.703216 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0"
Mar 21 09:43:59 crc kubenswrapper[4932]: E0321 09:43:59.704006 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.146901 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568104-g5g8q"]
Mar 21 09:44:00 crc kubenswrapper[4932]: E0321 09:44:00.147870 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2108c3f0-7756-4dec-bb8d-065b25538683" containerName="oc"
Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.147912 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2108c3f0-7756-4dec-bb8d-065b25538683" containerName="oc"
Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.148410 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2108c3f0-7756-4dec-bb8d-065b25538683" containerName="oc"
Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.149645 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.152609 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.152866 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.153679 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.158444 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568104-g5g8q"] Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.223619 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952lv\" (UniqueName: \"kubernetes.io/projected/f28e067d-b0ec-47c6-ad04-0c0d08dde712-kube-api-access-952lv\") pod \"auto-csr-approver-29568104-g5g8q\" (UID: \"f28e067d-b0ec-47c6-ad04-0c0d08dde712\") " pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.326384 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-952lv\" (UniqueName: \"kubernetes.io/projected/f28e067d-b0ec-47c6-ad04-0c0d08dde712-kube-api-access-952lv\") pod \"auto-csr-approver-29568104-g5g8q\" (UID: \"f28e067d-b0ec-47c6-ad04-0c0d08dde712\") " pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.352798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-952lv\" (UniqueName: \"kubernetes.io/projected/f28e067d-b0ec-47c6-ad04-0c0d08dde712-kube-api-access-952lv\") pod \"auto-csr-approver-29568104-g5g8q\" (UID: \"f28e067d-b0ec-47c6-ad04-0c0d08dde712\") " 
pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.476603 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:00 crc kubenswrapper[4932]: I0321 09:44:00.913196 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568104-g5g8q"] Mar 21 09:44:01 crc kubenswrapper[4932]: I0321 09:44:01.857295 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" event={"ID":"f28e067d-b0ec-47c6-ad04-0c0d08dde712","Type":"ContainerStarted","Data":"6bafa1f49cfda134de142386a392eced64579ec46d1af56798c0fbad892551db"} Mar 21 09:44:03 crc kubenswrapper[4932]: I0321 09:44:03.878526 4932 generic.go:334] "Generic (PLEG): container finished" podID="f28e067d-b0ec-47c6-ad04-0c0d08dde712" containerID="a54b60d43e5a05d9803a41fdddb63fab05767eec9bf679c712cf278c3dbbaf01" exitCode=0 Mar 21 09:44:03 crc kubenswrapper[4932]: I0321 09:44:03.878649 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" event={"ID":"f28e067d-b0ec-47c6-ad04-0c0d08dde712","Type":"ContainerDied","Data":"a54b60d43e5a05d9803a41fdddb63fab05767eec9bf679c712cf278c3dbbaf01"} Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.252271 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.332229 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-952lv\" (UniqueName: \"kubernetes.io/projected/f28e067d-b0ec-47c6-ad04-0c0d08dde712-kube-api-access-952lv\") pod \"f28e067d-b0ec-47c6-ad04-0c0d08dde712\" (UID: \"f28e067d-b0ec-47c6-ad04-0c0d08dde712\") " Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.342263 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28e067d-b0ec-47c6-ad04-0c0d08dde712-kube-api-access-952lv" (OuterVolumeSpecName: "kube-api-access-952lv") pod "f28e067d-b0ec-47c6-ad04-0c0d08dde712" (UID: "f28e067d-b0ec-47c6-ad04-0c0d08dde712"). InnerVolumeSpecName "kube-api-access-952lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.436117 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-952lv\" (UniqueName: \"kubernetes.io/projected/f28e067d-b0ec-47c6-ad04-0c0d08dde712-kube-api-access-952lv\") on node \"crc\" DevicePath \"\"" Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.899414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" event={"ID":"f28e067d-b0ec-47c6-ad04-0c0d08dde712","Type":"ContainerDied","Data":"6bafa1f49cfda134de142386a392eced64579ec46d1af56798c0fbad892551db"} Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.899461 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bafa1f49cfda134de142386a392eced64579ec46d1af56798c0fbad892551db" Mar 21 09:44:05 crc kubenswrapper[4932]: I0321 09:44:05.899507 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568104-g5g8q" Mar 21 09:44:06 crc kubenswrapper[4932]: I0321 09:44:06.328634 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568098-4p75n"] Mar 21 09:44:06 crc kubenswrapper[4932]: I0321 09:44:06.336853 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568098-4p75n"] Mar 21 09:44:07 crc kubenswrapper[4932]: I0321 09:44:07.716451 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa69403-a103-48bd-999c-b62ba27c9356" path="/var/lib/kubelet/pods/2fa69403-a103-48bd-999c-b62ba27c9356/volumes" Mar 21 09:44:08 crc kubenswrapper[4932]: I0321 09:44:08.703181 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:44:08 crc kubenswrapper[4932]: E0321 09:44:08.703784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:44:14 crc kubenswrapper[4932]: I0321 09:44:14.703574 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:44:14 crc kubenswrapper[4932]: E0321 09:44:14.704741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:44:22 crc kubenswrapper[4932]: I0321 09:44:22.701841 4932 scope.go:117] "RemoveContainer" 
containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:44:22 crc kubenswrapper[4932]: E0321 09:44:22.703159 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:44:26 crc kubenswrapper[4932]: I0321 09:44:26.702390 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:44:26 crc kubenswrapper[4932]: E0321 09:44:26.703037 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:44:30 crc kubenswrapper[4932]: I0321 09:44:30.226322 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:44:30 crc kubenswrapper[4932]: I0321 09:44:30.227029 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:44:32 crc kubenswrapper[4932]: I0321 09:44:32.127626 4932 scope.go:117] "RemoveContainer" 
containerID="aa92ce4ee31291de1297dbbec3074ea1e49e75c1a66b07f769b07abc92f9aaa7" Mar 21 09:44:37 crc kubenswrapper[4932]: I0321 09:44:37.709528 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:44:37 crc kubenswrapper[4932]: E0321 09:44:37.711629 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:44:40 crc kubenswrapper[4932]: I0321 09:44:40.702992 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:44:40 crc kubenswrapper[4932]: E0321 09:44:40.703793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:44:49 crc kubenswrapper[4932]: I0321 09:44:49.702999 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:44:49 crc kubenswrapper[4932]: E0321 09:44:49.703614 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:44:55 crc kubenswrapper[4932]: I0321 09:44:55.702634 4932 scope.go:117] "RemoveContainer" 
containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:44:55 crc kubenswrapper[4932]: E0321 09:44:55.703595 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.155688 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z"] Mar 21 09:45:00 crc kubenswrapper[4932]: E0321 09:45:00.156765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28e067d-b0ec-47c6-ad04-0c0d08dde712" containerName="oc" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.156783 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28e067d-b0ec-47c6-ad04-0c0d08dde712" containerName="oc" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.157056 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28e067d-b0ec-47c6-ad04-0c0d08dde712" containerName="oc" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.157931 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.160126 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.160232 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.168145 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z"] Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.225586 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.225657 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.271407 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-secret-volume\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.271459 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczx8\" (UniqueName: \"kubernetes.io/projected/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-kube-api-access-qczx8\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.271622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-config-volume\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.373061 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-secret-volume\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.373431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qczx8\" (UniqueName: \"kubernetes.io/projected/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-kube-api-access-qczx8\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.373543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-config-volume\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.374589 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-config-volume\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.380371 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-secret-volume\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.392233 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qczx8\" (UniqueName: \"kubernetes.io/projected/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-kube-api-access-qczx8\") pod \"collect-profiles-29568105-p8d5z\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.485447 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.702832 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:45:00 crc kubenswrapper[4932]: E0321 09:45:00.703403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:45:00 crc kubenswrapper[4932]: I0321 09:45:00.936482 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z"] Mar 21 09:45:01 crc kubenswrapper[4932]: I0321 09:45:01.402103 4932 generic.go:334] "Generic (PLEG): container finished" podID="dbc06b46-09cf-49bc-b960-7ebeb61f6de2" containerID="b6f8b26123bcdb0d11b8176cd3eab4d4fc30e3cf582dd76555b9d17f76c18d82" exitCode=0 Mar 21 09:45:01 crc kubenswrapper[4932]: I0321 09:45:01.402214 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" event={"ID":"dbc06b46-09cf-49bc-b960-7ebeb61f6de2","Type":"ContainerDied","Data":"b6f8b26123bcdb0d11b8176cd3eab4d4fc30e3cf582dd76555b9d17f76c18d82"} Mar 21 09:45:01 crc kubenswrapper[4932]: I0321 09:45:01.402420 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" event={"ID":"dbc06b46-09cf-49bc-b960-7ebeb61f6de2","Type":"ContainerStarted","Data":"018ac8ee362e234348803a07cd88c18fd4cad1ddffcea2fdd2a2723fcf5cd577"} Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.728134 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.824517 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-secret-volume\") pod \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.824858 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-config-volume\") pod \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.824888 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qczx8\" (UniqueName: \"kubernetes.io/projected/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-kube-api-access-qczx8\") pod \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\" (UID: \"dbc06b46-09cf-49bc-b960-7ebeb61f6de2\") " Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.825849 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-config-volume" (OuterVolumeSpecName: "config-volume") pod "dbc06b46-09cf-49bc-b960-7ebeb61f6de2" (UID: "dbc06b46-09cf-49bc-b960-7ebeb61f6de2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.832957 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dbc06b46-09cf-49bc-b960-7ebeb61f6de2" (UID: "dbc06b46-09cf-49bc-b960-7ebeb61f6de2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.833819 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-kube-api-access-qczx8" (OuterVolumeSpecName: "kube-api-access-qczx8") pod "dbc06b46-09cf-49bc-b960-7ebeb61f6de2" (UID: "dbc06b46-09cf-49bc-b960-7ebeb61f6de2"). InnerVolumeSpecName "kube-api-access-qczx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.927598 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.927655 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qczx8\" (UniqueName: \"kubernetes.io/projected/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-kube-api-access-qczx8\") on node \"crc\" DevicePath \"\"" Mar 21 09:45:02 crc kubenswrapper[4932]: I0321 09:45:02.927669 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbc06b46-09cf-49bc-b960-7ebeb61f6de2-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 09:45:03 crc kubenswrapper[4932]: I0321 09:45:03.419323 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" event={"ID":"dbc06b46-09cf-49bc-b960-7ebeb61f6de2","Type":"ContainerDied","Data":"018ac8ee362e234348803a07cd88c18fd4cad1ddffcea2fdd2a2723fcf5cd577"} Mar 21 09:45:03 crc kubenswrapper[4932]: I0321 09:45:03.419378 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018ac8ee362e234348803a07cd88c18fd4cad1ddffcea2fdd2a2723fcf5cd577" Mar 21 09:45:03 crc kubenswrapper[4932]: I0321 09:45:03.419748 4932 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568105-p8d5z" Mar 21 09:45:03 crc kubenswrapper[4932]: I0321 09:45:03.800544 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"] Mar 21 09:45:03 crc kubenswrapper[4932]: I0321 09:45:03.809120 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568060-fq46x"] Mar 21 09:45:05 crc kubenswrapper[4932]: I0321 09:45:05.714649 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8309fb7a-e364-4f9b-a723-a0d4926a0a51" path="/var/lib/kubelet/pods/8309fb7a-e364-4f9b-a723-a0d4926a0a51/volumes" Mar 21 09:45:06 crc kubenswrapper[4932]: I0321 09:45:06.702770 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:45:06 crc kubenswrapper[4932]: E0321 09:45:06.703187 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:45:11 crc kubenswrapper[4932]: I0321 09:45:11.703116 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:45:11 crc kubenswrapper[4932]: E0321 09:45:11.703907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:45:19 crc kubenswrapper[4932]: I0321 09:45:19.702394 4932 
scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:45:19 crc kubenswrapper[4932]: E0321 09:45:19.703089 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:45:25 crc kubenswrapper[4932]: I0321 09:45:25.702840 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:45:25 crc kubenswrapper[4932]: E0321 09:45:25.703628 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.225203 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.225808 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.226044 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.226869 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb920de1a494d2a671ce4ad301f4c34f2752b4a5b457b8ced984003f2c7dee4e"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.226926 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://fb920de1a494d2a671ce4ad301f4c34f2752b4a5b457b8ced984003f2c7dee4e" gracePeriod=600 Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.674693 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="fb920de1a494d2a671ce4ad301f4c34f2752b4a5b457b8ced984003f2c7dee4e" exitCode=0 Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.675203 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"fb920de1a494d2a671ce4ad301f4c34f2752b4a5b457b8ced984003f2c7dee4e"} Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.675244 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4"} Mar 21 09:45:30 crc kubenswrapper[4932]: I0321 09:45:30.675267 4932 scope.go:117] "RemoveContainer" 
containerID="ecc007ad070c6c79f062f071c3c377c303e3d30b36b54a12c3b412d9fb32b1bb" Mar 21 09:45:31 crc kubenswrapper[4932]: I0321 09:45:31.702291 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:45:31 crc kubenswrapper[4932]: E0321 09:45:31.703121 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:45:32 crc kubenswrapper[4932]: I0321 09:45:32.196015 4932 scope.go:117] "RemoveContainer" containerID="024919ca6cc75d54fd07cac05a2e2869d6d1dd7301ca35177f4e73ea829a16c2" Mar 21 09:45:37 crc kubenswrapper[4932]: I0321 09:45:37.711383 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:45:37 crc kubenswrapper[4932]: E0321 09:45:37.712951 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:45:43 crc kubenswrapper[4932]: I0321 09:45:43.702587 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:45:43 crc kubenswrapper[4932]: E0321 09:45:43.704369 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" 
podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:45:50 crc kubenswrapper[4932]: I0321 09:45:50.702022 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:45:50 crc kubenswrapper[4932]: E0321 09:45:50.702802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:45:54 crc kubenswrapper[4932]: I0321 09:45:54.702622 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:45:54 crc kubenswrapper[4932]: E0321 09:45:54.703413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.157758 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568106-rj7tb"] Mar 21 09:46:00 crc kubenswrapper[4932]: E0321 09:46:00.159818 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc06b46-09cf-49bc-b960-7ebeb61f6de2" containerName="collect-profiles" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.159849 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc06b46-09cf-49bc-b960-7ebeb61f6de2" containerName="collect-profiles" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.160211 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc06b46-09cf-49bc-b960-7ebeb61f6de2" containerName="collect-profiles" 
Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.161921 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.164386 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.164434 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.166176 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.169526 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568106-rj7tb"] Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.255222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclc6\" (UniqueName: \"kubernetes.io/projected/9d4c443e-2973-454d-8782-b8c03b70bf32-kube-api-access-cclc6\") pod \"auto-csr-approver-29568106-rj7tb\" (UID: \"9d4c443e-2973-454d-8782-b8c03b70bf32\") " pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.357148 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cclc6\" (UniqueName: \"kubernetes.io/projected/9d4c443e-2973-454d-8782-b8c03b70bf32-kube-api-access-cclc6\") pod \"auto-csr-approver-29568106-rj7tb\" (UID: \"9d4c443e-2973-454d-8782-b8c03b70bf32\") " pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.379415 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cclc6\" (UniqueName: 
\"kubernetes.io/projected/9d4c443e-2973-454d-8782-b8c03b70bf32-kube-api-access-cclc6\") pod \"auto-csr-approver-29568106-rj7tb\" (UID: \"9d4c443e-2973-454d-8782-b8c03b70bf32\") " pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.489493 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:00 crc kubenswrapper[4932]: I0321 09:46:00.940173 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568106-rj7tb"] Mar 21 09:46:00 crc kubenswrapper[4932]: W0321 09:46:00.943508 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d4c443e_2973_454d_8782_b8c03b70bf32.slice/crio-329718e5870fcb3f969705f3866887805507e111cb0e9e3ee4c66a547c56c654 WatchSource:0}: Error finding container 329718e5870fcb3f969705f3866887805507e111cb0e9e3ee4c66a547c56c654: Status 404 returned error can't find the container with id 329718e5870fcb3f969705f3866887805507e111cb0e9e3ee4c66a547c56c654 Mar 21 09:46:01 crc kubenswrapper[4932]: I0321 09:46:01.970171 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" event={"ID":"9d4c443e-2973-454d-8782-b8c03b70bf32","Type":"ContainerStarted","Data":"329718e5870fcb3f969705f3866887805507e111cb0e9e3ee4c66a547c56c654"} Mar 21 09:46:02 crc kubenswrapper[4932]: I0321 09:46:02.979720 4932 generic.go:334] "Generic (PLEG): container finished" podID="9d4c443e-2973-454d-8782-b8c03b70bf32" containerID="33b8d621d18fb42819300ab00a9022784d81eb52590fee83cababbb78322becb" exitCode=0 Mar 21 09:46:02 crc kubenswrapper[4932]: I0321 09:46:02.979781 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" 
event={"ID":"9d4c443e-2973-454d-8782-b8c03b70bf32","Type":"ContainerDied","Data":"33b8d621d18fb42819300ab00a9022784d81eb52590fee83cababbb78322becb"} Mar 21 09:46:04 crc kubenswrapper[4932]: I0321 09:46:04.448572 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:04 crc kubenswrapper[4932]: I0321 09:46:04.544843 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cclc6\" (UniqueName: \"kubernetes.io/projected/9d4c443e-2973-454d-8782-b8c03b70bf32-kube-api-access-cclc6\") pod \"9d4c443e-2973-454d-8782-b8c03b70bf32\" (UID: \"9d4c443e-2973-454d-8782-b8c03b70bf32\") " Mar 21 09:46:04 crc kubenswrapper[4932]: I0321 09:46:04.554549 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4c443e-2973-454d-8782-b8c03b70bf32-kube-api-access-cclc6" (OuterVolumeSpecName: "kube-api-access-cclc6") pod "9d4c443e-2973-454d-8782-b8c03b70bf32" (UID: "9d4c443e-2973-454d-8782-b8c03b70bf32"). InnerVolumeSpecName "kube-api-access-cclc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:46:04 crc kubenswrapper[4932]: I0321 09:46:04.647962 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cclc6\" (UniqueName: \"kubernetes.io/projected/9d4c443e-2973-454d-8782-b8c03b70bf32-kube-api-access-cclc6\") on node \"crc\" DevicePath \"\"" Mar 21 09:46:04 crc kubenswrapper[4932]: I0321 09:46:04.704022 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:46:04 crc kubenswrapper[4932]: E0321 09:46:04.704443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:46:05 crc kubenswrapper[4932]: I0321 09:46:05.000758 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" event={"ID":"9d4c443e-2973-454d-8782-b8c03b70bf32","Type":"ContainerDied","Data":"329718e5870fcb3f969705f3866887805507e111cb0e9e3ee4c66a547c56c654"} Mar 21 09:46:05 crc kubenswrapper[4932]: I0321 09:46:05.000830 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="329718e5870fcb3f969705f3866887805507e111cb0e9e3ee4c66a547c56c654" Mar 21 09:46:05 crc kubenswrapper[4932]: I0321 09:46:05.000862 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568106-rj7tb" Mar 21 09:46:05 crc kubenswrapper[4932]: I0321 09:46:05.550566 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568100-d98vn"] Mar 21 09:46:05 crc kubenswrapper[4932]: I0321 09:46:05.561308 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568100-d98vn"] Mar 21 09:46:05 crc kubenswrapper[4932]: I0321 09:46:05.717772 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000d6da3-2273-4977-9052-c5e9cdbdf740" path="/var/lib/kubelet/pods/000d6da3-2273-4977-9052-c5e9cdbdf740/volumes" Mar 21 09:46:07 crc kubenswrapper[4932]: I0321 09:46:07.709144 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:46:07 crc kubenswrapper[4932]: E0321 09:46:07.710085 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:46:16 crc kubenswrapper[4932]: I0321 09:46:16.703923 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:46:16 crc kubenswrapper[4932]: E0321 09:46:16.705537 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:46:20 crc kubenswrapper[4932]: I0321 09:46:20.702950 4932 scope.go:117] "RemoveContainer" 
containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:46:20 crc kubenswrapper[4932]: E0321 09:46:20.703686 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:46:27 crc kubenswrapper[4932]: I0321 09:46:27.710464 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:46:27 crc kubenswrapper[4932]: E0321 09:46:27.711111 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:46:32 crc kubenswrapper[4932]: I0321 09:46:32.255961 4932 scope.go:117] "RemoveContainer" containerID="19657af7ab466fe8c10169c1f2a013fa45ecd52529fd1833addea2147d2be227" Mar 21 09:46:32 crc kubenswrapper[4932]: I0321 09:46:32.703618 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:46:32 crc kubenswrapper[4932]: E0321 09:46:32.704615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:46:39 crc kubenswrapper[4932]: I0321 09:46:39.702566 4932 scope.go:117] "RemoveContainer" 
containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:46:39 crc kubenswrapper[4932]: E0321 09:46:39.703365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:46:47 crc kubenswrapper[4932]: I0321 09:46:47.709964 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:46:47 crc kubenswrapper[4932]: E0321 09:46:47.711379 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:46:50 crc kubenswrapper[4932]: I0321 09:46:50.702754 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:46:50 crc kubenswrapper[4932]: E0321 09:46:50.703244 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:46:58 crc kubenswrapper[4932]: I0321 09:46:58.702714 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:46:58 crc kubenswrapper[4932]: E0321 09:46:58.703436 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:47:05 crc kubenswrapper[4932]: I0321 09:47:05.703623 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:47:06 crc kubenswrapper[4932]: I0321 09:47:06.597115 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b"} Mar 21 09:47:07 crc kubenswrapper[4932]: I0321 09:47:07.740355 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:47:07 crc kubenswrapper[4932]: I0321 09:47:07.740654 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:47:11 crc kubenswrapper[4932]: I0321 09:47:11.702986 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:47:12 crc kubenswrapper[4932]: I0321 09:47:12.683011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83"} Mar 21 09:47:14 crc kubenswrapper[4932]: I0321 09:47:14.711417 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" exitCode=1 Mar 21 09:47:14 crc kubenswrapper[4932]: I0321 09:47:14.712000 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b"} Mar 21 09:47:14 crc kubenswrapper[4932]: I0321 09:47:14.712056 4932 scope.go:117] "RemoveContainer" containerID="8f19437b9648e2b4d6dceac36f15589e74e7f7a54ec3660b7a110a6ca4a23034" Mar 21 09:47:14 crc kubenswrapper[4932]: I0321 09:47:14.713061 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:47:14 crc kubenswrapper[4932]: E0321 09:47:14.713359 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:47:17 crc kubenswrapper[4932]: I0321 09:47:17.740948 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:47:17 crc kubenswrapper[4932]: I0321 09:47:17.741816 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:47:17 crc kubenswrapper[4932]: I0321 09:47:17.743096 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:47:17 crc kubenswrapper[4932]: E0321 09:47:17.743443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:47:17 crc kubenswrapper[4932]: I0321 09:47:17.948536 4932 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:47:17 crc kubenswrapper[4932]: I0321 09:47:17.948660 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:47:20 crc kubenswrapper[4932]: I0321 09:47:20.778364 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" exitCode=1 Mar 21 09:47:20 crc kubenswrapper[4932]: I0321 09:47:20.778458 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83"} Mar 21 09:47:20 crc kubenswrapper[4932]: I0321 09:47:20.778785 4932 scope.go:117] "RemoveContainer" containerID="17536df4d534aa561a5d386d64716f038aea50b237cdcd989cb4b8856ec220a0" Mar 21 09:47:20 crc kubenswrapper[4932]: I0321 09:47:20.779659 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:47:20 crc kubenswrapper[4932]: E0321 09:47:20.780013 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:47:27 crc kubenswrapper[4932]: I0321 09:47:27.947907 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:47:27 crc kubenswrapper[4932]: I0321 09:47:27.948871 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:47:27 crc kubenswrapper[4932]: I0321 
09:47:27.950395 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:47:27 crc kubenswrapper[4932]: E0321 09:47:27.951073 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:47:30 crc kubenswrapper[4932]: I0321 09:47:30.225693 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:47:30 crc kubenswrapper[4932]: I0321 09:47:30.226185 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:47:30 crc kubenswrapper[4932]: I0321 09:47:30.702901 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:47:30 crc kubenswrapper[4932]: E0321 09:47:30.703207 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:47:42 crc kubenswrapper[4932]: I0321 09:47:42.702062 4932 scope.go:117] "RemoveContainer" 
containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:47:42 crc kubenswrapper[4932]: E0321 09:47:42.703110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:47:43 crc kubenswrapper[4932]: I0321 09:47:43.703765 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:47:43 crc kubenswrapper[4932]: E0321 09:47:43.704059 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:47:54 crc kubenswrapper[4932]: I0321 09:47:54.702532 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:47:54 crc kubenswrapper[4932]: E0321 09:47:54.703490 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:47:56 crc kubenswrapper[4932]: I0321 09:47:56.702732 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:47:56 crc kubenswrapper[4932]: E0321 09:47:56.703826 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.146693 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568108-n282s"] Mar 21 09:48:00 crc kubenswrapper[4932]: E0321 09:48:00.147077 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4c443e-2973-454d-8782-b8c03b70bf32" containerName="oc" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.147090 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4c443e-2973-454d-8782-b8c03b70bf32" containerName="oc" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.147328 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4c443e-2973-454d-8782-b8c03b70bf32" containerName="oc" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.147986 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.150916 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.152173 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.156763 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.167909 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568108-n282s"] Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.173376 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt68j\" (UniqueName: \"kubernetes.io/projected/32688360-089e-4505-88a5-ea029c27868b-kube-api-access-rt68j\") pod \"auto-csr-approver-29568108-n282s\" (UID: \"32688360-089e-4505-88a5-ea029c27868b\") " pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.225747 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.225810 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:48:00 crc 
kubenswrapper[4932]: I0321 09:48:00.275931 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt68j\" (UniqueName: \"kubernetes.io/projected/32688360-089e-4505-88a5-ea029c27868b-kube-api-access-rt68j\") pod \"auto-csr-approver-29568108-n282s\" (UID: \"32688360-089e-4505-88a5-ea029c27868b\") " pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.298232 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt68j\" (UniqueName: \"kubernetes.io/projected/32688360-089e-4505-88a5-ea029c27868b-kube-api-access-rt68j\") pod \"auto-csr-approver-29568108-n282s\" (UID: \"32688360-089e-4505-88a5-ea029c27868b\") " pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.474928 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.982658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568108-n282s"] Mar 21 09:48:00 crc kubenswrapper[4932]: I0321 09:48:00.986251 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:48:01 crc kubenswrapper[4932]: I0321 09:48:01.239871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568108-n282s" event={"ID":"32688360-089e-4505-88a5-ea029c27868b","Type":"ContainerStarted","Data":"2b2de95ca2437ae154439d40a2008e85f69a6e91aff1109a43a0e1533e8d0b66"} Mar 21 09:48:02 crc kubenswrapper[4932]: I0321 09:48:02.253883 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568108-n282s" event={"ID":"32688360-089e-4505-88a5-ea029c27868b","Type":"ContainerStarted","Data":"615a99fd154d867acd636b429f81db7b78c1d3de3e419dc742aacb7f6c37eb30"} Mar 21 
09:48:03 crc kubenswrapper[4932]: I0321 09:48:03.266382 4932 generic.go:334] "Generic (PLEG): container finished" podID="32688360-089e-4505-88a5-ea029c27868b" containerID="615a99fd154d867acd636b429f81db7b78c1d3de3e419dc742aacb7f6c37eb30" exitCode=0 Mar 21 09:48:03 crc kubenswrapper[4932]: I0321 09:48:03.266509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568108-n282s" event={"ID":"32688360-089e-4505-88a5-ea029c27868b","Type":"ContainerDied","Data":"615a99fd154d867acd636b429f81db7b78c1d3de3e419dc742aacb7f6c37eb30"} Mar 21 09:48:04 crc kubenswrapper[4932]: I0321 09:48:04.649668 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:04 crc kubenswrapper[4932]: I0321 09:48:04.794954 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt68j\" (UniqueName: \"kubernetes.io/projected/32688360-089e-4505-88a5-ea029c27868b-kube-api-access-rt68j\") pod \"32688360-089e-4505-88a5-ea029c27868b\" (UID: \"32688360-089e-4505-88a5-ea029c27868b\") " Mar 21 09:48:04 crc kubenswrapper[4932]: I0321 09:48:04.803973 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32688360-089e-4505-88a5-ea029c27868b-kube-api-access-rt68j" (OuterVolumeSpecName: "kube-api-access-rt68j") pod "32688360-089e-4505-88a5-ea029c27868b" (UID: "32688360-089e-4505-88a5-ea029c27868b"). InnerVolumeSpecName "kube-api-access-rt68j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:48:04 crc kubenswrapper[4932]: I0321 09:48:04.900518 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt68j\" (UniqueName: \"kubernetes.io/projected/32688360-089e-4505-88a5-ea029c27868b-kube-api-access-rt68j\") on node \"crc\" DevicePath \"\"" Mar 21 09:48:05 crc kubenswrapper[4932]: I0321 09:48:05.295382 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568108-n282s" event={"ID":"32688360-089e-4505-88a5-ea029c27868b","Type":"ContainerDied","Data":"2b2de95ca2437ae154439d40a2008e85f69a6e91aff1109a43a0e1533e8d0b66"} Mar 21 09:48:05 crc kubenswrapper[4932]: I0321 09:48:05.295469 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568108-n282s" Mar 21 09:48:05 crc kubenswrapper[4932]: I0321 09:48:05.295489 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b2de95ca2437ae154439d40a2008e85f69a6e91aff1109a43a0e1533e8d0b66" Mar 21 09:48:05 crc kubenswrapper[4932]: I0321 09:48:05.381203 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568102-gbcxq"] Mar 21 09:48:05 crc kubenswrapper[4932]: I0321 09:48:05.392653 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568102-gbcxq"] Mar 21 09:48:05 crc kubenswrapper[4932]: I0321 09:48:05.717938 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2108c3f0-7756-4dec-bb8d-065b25538683" path="/var/lib/kubelet/pods/2108c3f0-7756-4dec-bb8d-065b25538683/volumes" Mar 21 09:48:08 crc kubenswrapper[4932]: I0321 09:48:08.704787 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:48:08 crc kubenswrapper[4932]: E0321 09:48:08.706021 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:48:09 crc kubenswrapper[4932]: I0321 09:48:09.945128 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f2j76"] Mar 21 09:48:09 crc kubenswrapper[4932]: E0321 09:48:09.945863 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32688360-089e-4505-88a5-ea029c27868b" containerName="oc" Mar 21 09:48:09 crc kubenswrapper[4932]: I0321 09:48:09.945888 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="32688360-089e-4505-88a5-ea029c27868b" containerName="oc" Mar 21 09:48:09 crc kubenswrapper[4932]: I0321 09:48:09.946303 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="32688360-089e-4505-88a5-ea029c27868b" containerName="oc" Mar 21 09:48:09 crc kubenswrapper[4932]: I0321 09:48:09.948896 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:09 crc kubenswrapper[4932]: I0321 09:48:09.957179 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f2j76"] Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.133056 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-catalog-content\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.133674 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-utilities\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.133731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8d7\" (UniqueName: \"kubernetes.io/projected/69cd1e0f-c117-4b51-8123-22be364a3027-kube-api-access-4t8d7\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.235979 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t8d7\" (UniqueName: \"kubernetes.io/projected/69cd1e0f-c117-4b51-8123-22be364a3027-kube-api-access-4t8d7\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.236163 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-catalog-content\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.236224 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-utilities\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.236772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-utilities\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.236988 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-catalog-content\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.261995 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t8d7\" (UniqueName: \"kubernetes.io/projected/69cd1e0f-c117-4b51-8123-22be364a3027-kube-api-access-4t8d7\") pod \"certified-operators-f2j76\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.273017 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:10 crc kubenswrapper[4932]: I0321 09:48:10.625219 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f2j76"] Mar 21 09:48:11 crc kubenswrapper[4932]: I0321 09:48:11.374761 4932 generic.go:334] "Generic (PLEG): container finished" podID="69cd1e0f-c117-4b51-8123-22be364a3027" containerID="975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3" exitCode=0 Mar 21 09:48:11 crc kubenswrapper[4932]: I0321 09:48:11.374842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerDied","Data":"975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3"} Mar 21 09:48:11 crc kubenswrapper[4932]: I0321 09:48:11.375047 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerStarted","Data":"f4508b21eacc399768afd61460cbc9ffbf0cb34e48b4ae7e3476ce19540e02b2"} Mar 21 09:48:11 crc kubenswrapper[4932]: I0321 09:48:11.702583 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:48:11 crc kubenswrapper[4932]: E0321 09:48:11.702981 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:48:12 crc kubenswrapper[4932]: I0321 09:48:12.390132 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" 
event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerStarted","Data":"874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876"} Mar 21 09:48:13 crc kubenswrapper[4932]: I0321 09:48:13.405763 4932 generic.go:334] "Generic (PLEG): container finished" podID="69cd1e0f-c117-4b51-8123-22be364a3027" containerID="874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876" exitCode=0 Mar 21 09:48:13 crc kubenswrapper[4932]: I0321 09:48:13.406019 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerDied","Data":"874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876"} Mar 21 09:48:14 crc kubenswrapper[4932]: I0321 09:48:14.418395 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerStarted","Data":"ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c"} Mar 21 09:48:14 crc kubenswrapper[4932]: I0321 09:48:14.454040 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f2j76" podStartSLOduration=2.880986664 podStartE2EDuration="5.454009188s" podCreationTimestamp="2026-03-21 09:48:09 +0000 UTC" firstStartedPulling="2026-03-21 09:48:11.379408271 +0000 UTC m=+2994.974606540" lastFinishedPulling="2026-03-21 09:48:13.952430795 +0000 UTC m=+2997.547629064" observedRunningTime="2026-03-21 09:48:14.444194731 +0000 UTC m=+2998.039393000" watchObservedRunningTime="2026-03-21 09:48:14.454009188 +0000 UTC m=+2998.049207457" Mar 21 09:48:20 crc kubenswrapper[4932]: I0321 09:48:20.273932 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:20 crc kubenswrapper[4932]: I0321 09:48:20.274432 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:20 crc kubenswrapper[4932]: I0321 09:48:20.319479 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:20 crc kubenswrapper[4932]: I0321 09:48:20.521960 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:20 crc kubenswrapper[4932]: I0321 09:48:20.579102 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f2j76"] Mar 21 09:48:22 crc kubenswrapper[4932]: I0321 09:48:22.490953 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f2j76" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="registry-server" containerID="cri-o://ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c" gracePeriod=2 Mar 21 09:48:22 crc kubenswrapper[4932]: I0321 09:48:22.981513 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.081617 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t8d7\" (UniqueName: \"kubernetes.io/projected/69cd1e0f-c117-4b51-8123-22be364a3027-kube-api-access-4t8d7\") pod \"69cd1e0f-c117-4b51-8123-22be364a3027\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.081949 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-utilities\") pod \"69cd1e0f-c117-4b51-8123-22be364a3027\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.081999 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-catalog-content\") pod \"69cd1e0f-c117-4b51-8123-22be364a3027\" (UID: \"69cd1e0f-c117-4b51-8123-22be364a3027\") " Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.083059 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-utilities" (OuterVolumeSpecName: "utilities") pod "69cd1e0f-c117-4b51-8123-22be364a3027" (UID: "69cd1e0f-c117-4b51-8123-22be364a3027"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.089038 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69cd1e0f-c117-4b51-8123-22be364a3027-kube-api-access-4t8d7" (OuterVolumeSpecName: "kube-api-access-4t8d7") pod "69cd1e0f-c117-4b51-8123-22be364a3027" (UID: "69cd1e0f-c117-4b51-8123-22be364a3027"). InnerVolumeSpecName "kube-api-access-4t8d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.185009 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t8d7\" (UniqueName: \"kubernetes.io/projected/69cd1e0f-c117-4b51-8123-22be364a3027-kube-api-access-4t8d7\") on node \"crc\" DevicePath \"\"" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.185048 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.501485 4932 generic.go:334] "Generic (PLEG): container finished" podID="69cd1e0f-c117-4b51-8123-22be364a3027" containerID="ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c" exitCode=0 Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.501574 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerDied","Data":"ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c"} Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.502695 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f2j76" event={"ID":"69cd1e0f-c117-4b51-8123-22be364a3027","Type":"ContainerDied","Data":"f4508b21eacc399768afd61460cbc9ffbf0cb34e48b4ae7e3476ce19540e02b2"} Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.501649 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f2j76" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.502778 4932 scope.go:117] "RemoveContainer" containerID="ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.526492 4932 scope.go:117] "RemoveContainer" containerID="874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.560390 4932 scope.go:117] "RemoveContainer" containerID="975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.596666 4932 scope.go:117] "RemoveContainer" containerID="ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c" Mar 21 09:48:23 crc kubenswrapper[4932]: E0321 09:48:23.597244 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c\": container with ID starting with ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c not found: ID does not exist" containerID="ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.597297 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c"} err="failed to get container status \"ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c\": rpc error: code = NotFound desc = could not find container \"ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c\": container with ID starting with ae660834e649dc5bf2282670e4c3fe49c825ac81063fdd02841779265e1cd10c not found: ID does not exist" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.597327 4932 scope.go:117] "RemoveContainer" 
containerID="874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876" Mar 21 09:48:23 crc kubenswrapper[4932]: E0321 09:48:23.598197 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876\": container with ID starting with 874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876 not found: ID does not exist" containerID="874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.598227 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876"} err="failed to get container status \"874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876\": rpc error: code = NotFound desc = could not find container \"874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876\": container with ID starting with 874f0ca22dd4f6f49d9935e3753770025351414af7ba1659e4c6e68021eea876 not found: ID does not exist" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.598249 4932 scope.go:117] "RemoveContainer" containerID="975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3" Mar 21 09:48:23 crc kubenswrapper[4932]: E0321 09:48:23.598505 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3\": container with ID starting with 975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3 not found: ID does not exist" containerID="975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.598619 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3"} err="failed to get container status \"975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3\": rpc error: code = NotFound desc = could not find container \"975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3\": container with ID starting with 975d65e46636a2fd76bda82f7d03b9c20f44bc0c0ecc28b27a9b6ede7ff1a5c3 not found: ID does not exist" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.702492 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:48:23 crc kubenswrapper[4932]: E0321 09:48:23.702722 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:48:23 crc kubenswrapper[4932]: I0321 09:48:23.938500 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69cd1e0f-c117-4b51-8123-22be364a3027" (UID: "69cd1e0f-c117-4b51-8123-22be364a3027"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:48:24 crc kubenswrapper[4932]: I0321 09:48:24.001162 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69cd1e0f-c117-4b51-8123-22be364a3027-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:48:24 crc kubenswrapper[4932]: I0321 09:48:24.136447 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f2j76"] Mar 21 09:48:24 crc kubenswrapper[4932]: I0321 09:48:24.143748 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f2j76"] Mar 21 09:48:25 crc kubenswrapper[4932]: I0321 09:48:25.716512 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" path="/var/lib/kubelet/pods/69cd1e0f-c117-4b51-8123-22be364a3027/volumes" Mar 21 09:48:26 crc kubenswrapper[4932]: I0321 09:48:26.703184 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:48:26 crc kubenswrapper[4932]: E0321 09:48:26.703450 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.225687 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.226195 4932 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.226235 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.227036 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.227090 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" gracePeriod=600 Mar 21 09:48:30 crc kubenswrapper[4932]: E0321 09:48:30.347163 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.578794 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" 
containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" exitCode=0 Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.578841 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4"} Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.578887 4932 scope.go:117] "RemoveContainer" containerID="fb920de1a494d2a671ce4ad301f4c34f2752b4a5b457b8ced984003f2c7dee4e" Mar 21 09:48:30 crc kubenswrapper[4932]: I0321 09:48:30.579602 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:48:30 crc kubenswrapper[4932]: E0321 09:48:30.579894 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:48:32 crc kubenswrapper[4932]: I0321 09:48:32.345547 4932 scope.go:117] "RemoveContainer" containerID="739baa3e2e114c0aba6486c3f239b0279a1729cda930fe7c49e3e10630d35668" Mar 21 09:48:37 crc kubenswrapper[4932]: I0321 09:48:37.711896 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:48:37 crc kubenswrapper[4932]: E0321 09:48:37.720106 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:48:38 crc kubenswrapper[4932]: I0321 09:48:38.703503 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:48:38 crc kubenswrapper[4932]: E0321 09:48:38.703929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:48:45 crc kubenswrapper[4932]: I0321 09:48:45.702610 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:48:45 crc kubenswrapper[4932]: E0321 09:48:45.703574 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:48:49 crc kubenswrapper[4932]: I0321 09:48:49.702026 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:48:49 crc kubenswrapper[4932]: E0321 09:48:49.702596 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:48:53 crc kubenswrapper[4932]: I0321 09:48:53.703181 4932 scope.go:117] "RemoveContainer" 
containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:48:53 crc kubenswrapper[4932]: E0321 09:48:53.703941 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.684536 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cqpk8"] Mar 21 09:48:55 crc kubenswrapper[4932]: E0321 09:48:55.685176 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="registry-server" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.685213 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="registry-server" Mar 21 09:48:55 crc kubenswrapper[4932]: E0321 09:48:55.685228 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="extract-content" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.685234 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="extract-content" Mar 21 09:48:55 crc kubenswrapper[4932]: E0321 09:48:55.685261 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="extract-utilities" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.685270 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="extract-utilities" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.685540 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="69cd1e0f-c117-4b51-8123-22be364a3027" containerName="registry-server" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.688789 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.721338 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqpk8"] Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.753967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-catalog-content\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.754291 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4m2\" (UniqueName: \"kubernetes.io/projected/860f806c-cc32-4f03-9863-1ca51c4b4262-kube-api-access-dk4m2\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.754323 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-utilities\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.856289 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4m2\" (UniqueName: \"kubernetes.io/projected/860f806c-cc32-4f03-9863-1ca51c4b4262-kube-api-access-dk4m2\") pod \"community-operators-cqpk8\" 
(UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.856628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-utilities\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.856678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-catalog-content\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.857217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-catalog-content\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.857231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-utilities\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:55 crc kubenswrapper[4932]: I0321 09:48:55.875918 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4m2\" (UniqueName: \"kubernetes.io/projected/860f806c-cc32-4f03-9863-1ca51c4b4262-kube-api-access-dk4m2\") pod \"community-operators-cqpk8\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " 
pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:56 crc kubenswrapper[4932]: I0321 09:48:56.011233 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:48:56 crc kubenswrapper[4932]: I0321 09:48:56.547410 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqpk8"] Mar 21 09:48:56 crc kubenswrapper[4932]: I0321 09:48:56.702994 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:48:56 crc kubenswrapper[4932]: E0321 09:48:56.703650 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:48:56 crc kubenswrapper[4932]: I0321 09:48:56.870651 4932 generic.go:334] "Generic (PLEG): container finished" podID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerID="ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67" exitCode=0 Mar 21 09:48:56 crc kubenswrapper[4932]: I0321 09:48:56.871062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqpk8" event={"ID":"860f806c-cc32-4f03-9863-1ca51c4b4262","Type":"ContainerDied","Data":"ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67"} Mar 21 09:48:56 crc kubenswrapper[4932]: I0321 09:48:56.871152 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqpk8" event={"ID":"860f806c-cc32-4f03-9863-1ca51c4b4262","Type":"ContainerStarted","Data":"922cfaed4fa2885fc7d6f2a9fcdb0cec40737119dcb395f6c0be21cd97926eed"} Mar 21 
09:48:57 crc kubenswrapper[4932]: I0321 09:48:57.881501 4932 generic.go:334] "Generic (PLEG): container finished" podID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerID="f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b" exitCode=0 Mar 21 09:48:57 crc kubenswrapper[4932]: I0321 09:48:57.881550 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqpk8" event={"ID":"860f806c-cc32-4f03-9863-1ca51c4b4262","Type":"ContainerDied","Data":"f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b"} Mar 21 09:48:58 crc kubenswrapper[4932]: I0321 09:48:58.894233 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqpk8" event={"ID":"860f806c-cc32-4f03-9863-1ca51c4b4262","Type":"ContainerStarted","Data":"952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274"} Mar 21 09:48:58 crc kubenswrapper[4932]: I0321 09:48:58.920989 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cqpk8" podStartSLOduration=2.44414781 podStartE2EDuration="3.920939623s" podCreationTimestamp="2026-03-21 09:48:55 +0000 UTC" firstStartedPulling="2026-03-21 09:48:56.873693349 +0000 UTC m=+3040.468891618" lastFinishedPulling="2026-03-21 09:48:58.350485162 +0000 UTC m=+3041.945683431" observedRunningTime="2026-03-21 09:48:58.91512041 +0000 UTC m=+3042.510318680" watchObservedRunningTime="2026-03-21 09:48:58.920939623 +0000 UTC m=+3042.516137902" Mar 21 09:49:00 crc kubenswrapper[4932]: I0321 09:49:00.702599 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:49:00 crc kubenswrapper[4932]: E0321 09:49:00.703077 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:49:06 crc kubenswrapper[4932]: I0321 09:49:06.011544 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:49:06 crc kubenswrapper[4932]: I0321 09:49:06.012382 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:49:06 crc kubenswrapper[4932]: I0321 09:49:06.065056 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:49:07 crc kubenswrapper[4932]: I0321 09:49:07.033366 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:49:07 crc kubenswrapper[4932]: I0321 09:49:07.098203 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqpk8"] Mar 21 09:49:07 crc kubenswrapper[4932]: I0321 09:49:07.709812 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:49:07 crc kubenswrapper[4932]: E0321 09:49:07.710102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:49:08 crc kubenswrapper[4932]: I0321 09:49:08.704090 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:49:08 crc kubenswrapper[4932]: E0321 09:49:08.704977 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:49:08 crc kubenswrapper[4932]: I0321 09:49:08.988097 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cqpk8" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="registry-server" containerID="cri-o://952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274" gracePeriod=2 Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.450623 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.487285 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-utilities\") pod \"860f806c-cc32-4f03-9863-1ca51c4b4262\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.487492 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-catalog-content\") pod \"860f806c-cc32-4f03-9863-1ca51c4b4262\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.487556 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4m2\" (UniqueName: \"kubernetes.io/projected/860f806c-cc32-4f03-9863-1ca51c4b4262-kube-api-access-dk4m2\") pod \"860f806c-cc32-4f03-9863-1ca51c4b4262\" (UID: \"860f806c-cc32-4f03-9863-1ca51c4b4262\") " 
Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.489638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-utilities" (OuterVolumeSpecName: "utilities") pod "860f806c-cc32-4f03-9863-1ca51c4b4262" (UID: "860f806c-cc32-4f03-9863-1ca51c4b4262"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.494600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860f806c-cc32-4f03-9863-1ca51c4b4262-kube-api-access-dk4m2" (OuterVolumeSpecName: "kube-api-access-dk4m2") pod "860f806c-cc32-4f03-9863-1ca51c4b4262" (UID: "860f806c-cc32-4f03-9863-1ca51c4b4262"). InnerVolumeSpecName "kube-api-access-dk4m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.543809 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "860f806c-cc32-4f03-9863-1ca51c4b4262" (UID: "860f806c-cc32-4f03-9863-1ca51c4b4262"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.590877 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.591227 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860f806c-cc32-4f03-9863-1ca51c4b4262-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:49:09 crc kubenswrapper[4932]: I0321 09:49:09.591371 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4m2\" (UniqueName: \"kubernetes.io/projected/860f806c-cc32-4f03-9863-1ca51c4b4262-kube-api-access-dk4m2\") on node \"crc\" DevicePath \"\"" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.000064 4932 generic.go:334] "Generic (PLEG): container finished" podID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerID="952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274" exitCode=0 Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.000106 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqpk8" event={"ID":"860f806c-cc32-4f03-9863-1ca51c4b4262","Type":"ContainerDied","Data":"952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274"} Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.000142 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqpk8" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.000159 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqpk8" event={"ID":"860f806c-cc32-4f03-9863-1ca51c4b4262","Type":"ContainerDied","Data":"922cfaed4fa2885fc7d6f2a9fcdb0cec40737119dcb395f6c0be21cd97926eed"} Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.000218 4932 scope.go:117] "RemoveContainer" containerID="952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.022943 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqpk8"] Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.029396 4932 scope.go:117] "RemoveContainer" containerID="f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.031564 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cqpk8"] Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.048978 4932 scope.go:117] "RemoveContainer" containerID="ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.110157 4932 scope.go:117] "RemoveContainer" containerID="952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274" Mar 21 09:49:10 crc kubenswrapper[4932]: E0321 09:49:10.110718 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274\": container with ID starting with 952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274 not found: ID does not exist" containerID="952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.110766 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274"} err="failed to get container status \"952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274\": rpc error: code = NotFound desc = could not find container \"952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274\": container with ID starting with 952e82e5844758d636205017b11cdac236026d9ff957e38874654224b8a66274 not found: ID does not exist" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.110797 4932 scope.go:117] "RemoveContainer" containerID="f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b" Mar 21 09:49:10 crc kubenswrapper[4932]: E0321 09:49:10.111167 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b\": container with ID starting with f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b not found: ID does not exist" containerID="f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.111212 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b"} err="failed to get container status \"f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b\": rpc error: code = NotFound desc = could not find container \"f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b\": container with ID starting with f002fe4a6cfd39f3de69d43972be57e1cd5f03696d893bf1f5b6f234040d1a4b not found: ID does not exist" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.111245 4932 scope.go:117] "RemoveContainer" containerID="ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67" Mar 21 09:49:10 crc kubenswrapper[4932]: E0321 
09:49:10.111517 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67\": container with ID starting with ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67 not found: ID does not exist" containerID="ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67" Mar 21 09:49:10 crc kubenswrapper[4932]: I0321 09:49:10.111539 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67"} err="failed to get container status \"ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67\": rpc error: code = NotFound desc = could not find container \"ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67\": container with ID starting with ce370fe51aa79c494a999d4abcaee8d42256f53e1fe15bf115d1db24de8e9c67 not found: ID does not exist" Mar 21 09:49:11 crc kubenswrapper[4932]: I0321 09:49:11.717714 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" path="/var/lib/kubelet/pods/860f806c-cc32-4f03-9863-1ca51c4b4262/volumes" Mar 21 09:49:15 crc kubenswrapper[4932]: I0321 09:49:15.703293 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:49:15 crc kubenswrapper[4932]: E0321 09:49:15.704075 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:49:19 crc kubenswrapper[4932]: I0321 09:49:19.702694 4932 scope.go:117] "RemoveContainer" 
containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:49:19 crc kubenswrapper[4932]: E0321 09:49:19.703699 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:49:20 crc kubenswrapper[4932]: I0321 09:49:20.704646 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:49:20 crc kubenswrapper[4932]: E0321 09:49:20.705470 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:49:27 crc kubenswrapper[4932]: I0321 09:49:27.708008 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:49:27 crc kubenswrapper[4932]: E0321 09:49:27.708841 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:49:31 crc kubenswrapper[4932]: I0321 09:49:31.703180 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:49:31 crc kubenswrapper[4932]: E0321 09:49:31.703993 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:49:32 crc kubenswrapper[4932]: I0321 09:49:32.702913 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:49:32 crc kubenswrapper[4932]: E0321 09:49:32.703544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:49:40 crc kubenswrapper[4932]: I0321 09:49:40.703465 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:49:40 crc kubenswrapper[4932]: E0321 09:49:40.704224 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:49:42 crc kubenswrapper[4932]: I0321 09:49:42.702277 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:49:42 crc kubenswrapper[4932]: E0321 09:49:42.702726 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:49:46 crc kubenswrapper[4932]: I0321 09:49:46.702832 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:49:46 crc kubenswrapper[4932]: E0321 09:49:46.703540 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:49:53 crc kubenswrapper[4932]: I0321 09:49:53.703091 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:49:53 crc kubenswrapper[4932]: E0321 09:49:53.703835 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:49:54 crc kubenswrapper[4932]: I0321 09:49:54.702235 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:49:54 crc kubenswrapper[4932]: E0321 09:49:54.702525 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" 
pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.155535 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568110-v6n9z"] Mar 21 09:50:00 crc kubenswrapper[4932]: E0321 09:50:00.158120 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="registry-server" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.158194 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="registry-server" Mar 21 09:50:00 crc kubenswrapper[4932]: E0321 09:50:00.158215 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="extract-content" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.158222 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="extract-content" Mar 21 09:50:00 crc kubenswrapper[4932]: E0321 09:50:00.158260 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="extract-utilities" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.158267 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="extract-utilities" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.158528 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="860f806c-cc32-4f03-9863-1ca51c4b4262" containerName="registry-server" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.159393 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.162955 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.163023 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.162959 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.171168 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568110-v6n9z"] Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.266276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkx8m\" (UniqueName: \"kubernetes.io/projected/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5-kube-api-access-zkx8m\") pod \"auto-csr-approver-29568110-v6n9z\" (UID: \"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5\") " pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.369481 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkx8m\" (UniqueName: \"kubernetes.io/projected/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5-kube-api-access-zkx8m\") pod \"auto-csr-approver-29568110-v6n9z\" (UID: \"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5\") " pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.396866 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkx8m\" (UniqueName: \"kubernetes.io/projected/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5-kube-api-access-zkx8m\") pod \"auto-csr-approver-29568110-v6n9z\" (UID: \"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5\") " 
pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.497024 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:00 crc kubenswrapper[4932]: I0321 09:50:00.703592 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:50:00 crc kubenswrapper[4932]: E0321 09:50:00.704341 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:50:01 crc kubenswrapper[4932]: I0321 09:50:01.019871 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568110-v6n9z"] Mar 21 09:50:01 crc kubenswrapper[4932]: I0321 09:50:01.530533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" event={"ID":"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5","Type":"ContainerStarted","Data":"e458c759e6359056905d118a977658c92fee7ece113c8828aff3c42dfe61b135"} Mar 21 09:50:02 crc kubenswrapper[4932]: I0321 09:50:02.544566 4932 generic.go:334] "Generic (PLEG): container finished" podID="58ead2eb-b4fb-4a71-b94b-47e4b4159ff5" containerID="2087aaab03b9fab9f6743d3efeef41b8f1e07d5790f3a9c3184ac8c9539e9336" exitCode=0 Mar 21 09:50:02 crc kubenswrapper[4932]: I0321 09:50:02.544639 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" event={"ID":"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5","Type":"ContainerDied","Data":"2087aaab03b9fab9f6743d3efeef41b8f1e07d5790f3a9c3184ac8c9539e9336"} 
Mar 21 09:50:03 crc kubenswrapper[4932]: I0321 09:50:03.963006 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:04 crc kubenswrapper[4932]: I0321 09:50:04.056935 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkx8m\" (UniqueName: \"kubernetes.io/projected/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5-kube-api-access-zkx8m\") pod \"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5\" (UID: \"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5\") " Mar 21 09:50:04 crc kubenswrapper[4932]: I0321 09:50:04.063953 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5-kube-api-access-zkx8m" (OuterVolumeSpecName: "kube-api-access-zkx8m") pod "58ead2eb-b4fb-4a71-b94b-47e4b4159ff5" (UID: "58ead2eb-b4fb-4a71-b94b-47e4b4159ff5"). InnerVolumeSpecName "kube-api-access-zkx8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:50:04 crc kubenswrapper[4932]: I0321 09:50:04.158572 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkx8m\" (UniqueName: \"kubernetes.io/projected/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5-kube-api-access-zkx8m\") on node \"crc\" DevicePath \"\"" Mar 21 09:50:04 crc kubenswrapper[4932]: I0321 09:50:04.565507 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" event={"ID":"58ead2eb-b4fb-4a71-b94b-47e4b4159ff5","Type":"ContainerDied","Data":"e458c759e6359056905d118a977658c92fee7ece113c8828aff3c42dfe61b135"} Mar 21 09:50:04 crc kubenswrapper[4932]: I0321 09:50:04.565843 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e458c759e6359056905d118a977658c92fee7ece113c8828aff3c42dfe61b135" Mar 21 09:50:04 crc kubenswrapper[4932]: I0321 09:50:04.565565 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568110-v6n9z" Mar 21 09:50:05 crc kubenswrapper[4932]: I0321 09:50:05.070045 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568104-g5g8q"] Mar 21 09:50:05 crc kubenswrapper[4932]: I0321 09:50:05.078689 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568104-g5g8q"] Mar 21 09:50:05 crc kubenswrapper[4932]: I0321 09:50:05.703097 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:50:05 crc kubenswrapper[4932]: E0321 09:50:05.703530 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:50:05 crc kubenswrapper[4932]: I0321 09:50:05.717589 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28e067d-b0ec-47c6-ad04-0c0d08dde712" path="/var/lib/kubelet/pods/f28e067d-b0ec-47c6-ad04-0c0d08dde712/volumes" Mar 21 09:50:07 crc kubenswrapper[4932]: I0321 09:50:07.711398 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:50:07 crc kubenswrapper[4932]: E0321 09:50:07.712028 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:50:15 crc kubenswrapper[4932]: I0321 09:50:15.703764 4932 scope.go:117] "RemoveContainer" 
containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:50:15 crc kubenswrapper[4932]: E0321 09:50:15.704861 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:50:16 crc kubenswrapper[4932]: I0321 09:50:16.704595 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:50:16 crc kubenswrapper[4932]: E0321 09:50:16.706974 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:50:21 crc kubenswrapper[4932]: I0321 09:50:21.703698 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:50:21 crc kubenswrapper[4932]: E0321 09:50:21.704850 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:50:26 crc kubenswrapper[4932]: I0321 09:50:26.702993 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:50:26 crc kubenswrapper[4932]: E0321 09:50:26.704313 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:50:27 crc kubenswrapper[4932]: I0321 09:50:27.711339 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:50:27 crc kubenswrapper[4932]: E0321 09:50:27.711952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:50:32 crc kubenswrapper[4932]: I0321 09:50:32.476695 4932 scope.go:117] "RemoveContainer" containerID="a54b60d43e5a05d9803a41fdddb63fab05767eec9bf679c712cf278c3dbbaf01" Mar 21 09:50:32 crc kubenswrapper[4932]: I0321 09:50:32.703327 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:50:32 crc kubenswrapper[4932]: E0321 09:50:32.703731 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:50:38 crc kubenswrapper[4932]: I0321 09:50:38.703086 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:50:38 crc 
kubenswrapper[4932]: E0321 09:50:38.703740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:50:39 crc kubenswrapper[4932]: I0321 09:50:39.702392 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:50:39 crc kubenswrapper[4932]: E0321 09:50:39.702984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:50:45 crc kubenswrapper[4932]: I0321 09:50:45.702939 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:50:45 crc kubenswrapper[4932]: E0321 09:50:45.703574 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:50:52 crc kubenswrapper[4932]: I0321 09:50:52.702694 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:50:52 crc kubenswrapper[4932]: I0321 09:50:52.703440 4932 scope.go:117] "RemoveContainer" 
containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:50:52 crc kubenswrapper[4932]: E0321 09:50:52.703776 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:50:52 crc kubenswrapper[4932]: E0321 09:50:52.703901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:50:56 crc kubenswrapper[4932]: I0321 09:50:56.702420 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:50:56 crc kubenswrapper[4932]: E0321 09:50:56.702956 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:51:06 crc kubenswrapper[4932]: I0321 09:51:06.702574 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:51:06 crc kubenswrapper[4932]: I0321 09:51:06.703560 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:51:06 crc kubenswrapper[4932]: E0321 09:51:06.703789 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:51:06 crc kubenswrapper[4932]: E0321 09:51:06.703823 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:51:08 crc kubenswrapper[4932]: I0321 09:51:08.702752 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:51:08 crc kubenswrapper[4932]: E0321 09:51:08.703261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:51:17 crc kubenswrapper[4932]: I0321 09:51:17.710018 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:51:17 crc kubenswrapper[4932]: E0321 09:51:17.712467 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:51:19 crc kubenswrapper[4932]: I0321 09:51:19.703701 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:51:19 crc kubenswrapper[4932]: E0321 09:51:19.704388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:51:23 crc kubenswrapper[4932]: I0321 09:51:23.704540 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:51:23 crc kubenswrapper[4932]: E0321 09:51:23.706298 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:51:29 crc kubenswrapper[4932]: I0321 09:51:29.703544 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:51:29 crc kubenswrapper[4932]: E0321 09:51:29.704237 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:51:30 crc kubenswrapper[4932]: I0321 09:51:30.703403 4932 scope.go:117] "RemoveContainer" 
containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:51:30 crc kubenswrapper[4932]: E0321 09:51:30.704216 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.823505 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2cjpx"] Mar 21 09:51:31 crc kubenswrapper[4932]: E0321 09:51:31.824499 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ead2eb-b4fb-4a71-b94b-47e4b4159ff5" containerName="oc" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.824513 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ead2eb-b4fb-4a71-b94b-47e4b4159ff5" containerName="oc" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.824735 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ead2eb-b4fb-4a71-b94b-47e4b4159ff5" containerName="oc" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.826748 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.840957 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cjpx"] Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.939068 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-utilities\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.939208 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-catalog-content\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:31 crc kubenswrapper[4932]: I0321 09:51:31.939237 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4r5\" (UniqueName: \"kubernetes.io/projected/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-kube-api-access-lf4r5\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.016618 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vp8x5"] Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.020637 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.035521 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp8x5"] Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.040938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-utilities\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.041289 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-catalog-content\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.041379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf4r5\" (UniqueName: \"kubernetes.io/projected/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-kube-api-access-lf4r5\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.041424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-utilities\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.041693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-catalog-content\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.073821 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf4r5\" (UniqueName: \"kubernetes.io/projected/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-kube-api-access-lf4r5\") pod \"redhat-operators-2cjpx\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.143365 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85fh\" (UniqueName: \"kubernetes.io/projected/705c4b89-fb39-423a-9a4d-92868d053495-kube-api-access-m85fh\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.143891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-utilities\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.143911 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-catalog-content\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.162210 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.246871 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85fh\" (UniqueName: \"kubernetes.io/projected/705c4b89-fb39-423a-9a4d-92868d053495-kube-api-access-m85fh\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.247028 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-utilities\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.247054 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-catalog-content\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.247764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-catalog-content\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.247806 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-utilities\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " 
pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.268236 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85fh\" (UniqueName: \"kubernetes.io/projected/705c4b89-fb39-423a-9a4d-92868d053495-kube-api-access-m85fh\") pod \"redhat-marketplace-vp8x5\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.341520 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.715244 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cjpx"] Mar 21 09:51:32 crc kubenswrapper[4932]: I0321 09:51:32.889927 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp8x5"] Mar 21 09:51:32 crc kubenswrapper[4932]: W0321 09:51:32.897045 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod705c4b89_fb39_423a_9a4d_92868d053495.slice/crio-3ad7ac02cb148064eea68fc9ae6c890af69bc5177c1e7b472560941e1844c6a2 WatchSource:0}: Error finding container 3ad7ac02cb148064eea68fc9ae6c890af69bc5177c1e7b472560941e1844c6a2: Status 404 returned error can't find the container with id 3ad7ac02cb148064eea68fc9ae6c890af69bc5177c1e7b472560941e1844c6a2 Mar 21 09:51:33 crc kubenswrapper[4932]: I0321 09:51:33.498685 4932 generic.go:334] "Generic (PLEG): container finished" podID="705c4b89-fb39-423a-9a4d-92868d053495" containerID="9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14" exitCode=0 Mar 21 09:51:33 crc kubenswrapper[4932]: I0321 09:51:33.498737 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" 
event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerDied","Data":"9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14"} Mar 21 09:51:33 crc kubenswrapper[4932]: I0321 09:51:33.498800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerStarted","Data":"3ad7ac02cb148064eea68fc9ae6c890af69bc5177c1e7b472560941e1844c6a2"} Mar 21 09:51:33 crc kubenswrapper[4932]: I0321 09:51:33.501497 4932 generic.go:334] "Generic (PLEG): container finished" podID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerID="f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1" exitCode=0 Mar 21 09:51:33 crc kubenswrapper[4932]: I0321 09:51:33.501938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerDied","Data":"f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1"} Mar 21 09:51:33 crc kubenswrapper[4932]: I0321 09:51:33.502060 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerStarted","Data":"1fb3233e4f0b68e0c07a8f07d3f0270edb7e913953ca66eca66001001e26cd7f"} Mar 21 09:51:34 crc kubenswrapper[4932]: I0321 09:51:34.515259 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerStarted","Data":"caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7"} Mar 21 09:51:35 crc kubenswrapper[4932]: I0321 09:51:35.524498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" 
event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerStarted","Data":"76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41"} Mar 21 09:51:35 crc kubenswrapper[4932]: I0321 09:51:35.526656 4932 generic.go:334] "Generic (PLEG): container finished" podID="705c4b89-fb39-423a-9a4d-92868d053495" containerID="caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7" exitCode=0 Mar 21 09:51:35 crc kubenswrapper[4932]: I0321 09:51:35.526745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerDied","Data":"caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7"} Mar 21 09:51:36 crc kubenswrapper[4932]: I0321 09:51:36.538806 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerStarted","Data":"3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6"} Mar 21 09:51:38 crc kubenswrapper[4932]: I0321 09:51:38.558997 4932 generic.go:334] "Generic (PLEG): container finished" podID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerID="76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41" exitCode=0 Mar 21 09:51:38 crc kubenswrapper[4932]: I0321 09:51:38.559067 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerDied","Data":"76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41"} Mar 21 09:51:38 crc kubenswrapper[4932]: I0321 09:51:38.586836 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vp8x5" podStartSLOduration=5.096903374 podStartE2EDuration="7.586814684s" podCreationTimestamp="2026-03-21 09:51:31 +0000 UTC" firstStartedPulling="2026-03-21 09:51:33.500686272 +0000 UTC 
m=+3197.095884541" lastFinishedPulling="2026-03-21 09:51:35.990597582 +0000 UTC m=+3199.585795851" observedRunningTime="2026-03-21 09:51:36.563241346 +0000 UTC m=+3200.158439645" watchObservedRunningTime="2026-03-21 09:51:38.586814684 +0000 UTC m=+3202.182012953" Mar 21 09:51:38 crc kubenswrapper[4932]: I0321 09:51:38.701917 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:51:38 crc kubenswrapper[4932]: E0321 09:51:38.702192 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:51:39 crc kubenswrapper[4932]: I0321 09:51:39.571499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerStarted","Data":"79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c"} Mar 21 09:51:39 crc kubenswrapper[4932]: I0321 09:51:39.593841 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2cjpx" podStartSLOduration=3.153037546 podStartE2EDuration="8.593821008s" podCreationTimestamp="2026-03-21 09:51:31 +0000 UTC" firstStartedPulling="2026-03-21 09:51:33.503070647 +0000 UTC m=+3197.098268916" lastFinishedPulling="2026-03-21 09:51:38.943854109 +0000 UTC m=+3202.539052378" observedRunningTime="2026-03-21 09:51:39.589324217 +0000 UTC m=+3203.184522486" watchObservedRunningTime="2026-03-21 09:51:39.593821008 +0000 UTC m=+3203.189019277" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.163117 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2cjpx" 
Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.163482 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.343004 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.343067 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.391927 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.641062 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.687172 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp8x5"] Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.703205 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:51:42 crc kubenswrapper[4932]: I0321 09:51:42.703289 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:51:42 crc kubenswrapper[4932]: E0321 09:51:42.703451 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:51:42 crc kubenswrapper[4932]: E0321 09:51:42.703578 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:51:43 crc kubenswrapper[4932]: I0321 09:51:43.213188 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2cjpx" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="registry-server" probeResult="failure" output=< Mar 21 09:51:43 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 09:51:43 crc kubenswrapper[4932]: > Mar 21 09:51:44 crc kubenswrapper[4932]: I0321 09:51:44.619594 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vp8x5" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="registry-server" containerID="cri-o://3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6" gracePeriod=2 Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.095991 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.233849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-catalog-content\") pod \"705c4b89-fb39-423a-9a4d-92868d053495\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.233999 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-utilities\") pod \"705c4b89-fb39-423a-9a4d-92868d053495\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.234044 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85fh\" (UniqueName: \"kubernetes.io/projected/705c4b89-fb39-423a-9a4d-92868d053495-kube-api-access-m85fh\") pod \"705c4b89-fb39-423a-9a4d-92868d053495\" (UID: \"705c4b89-fb39-423a-9a4d-92868d053495\") " Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.235669 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-utilities" (OuterVolumeSpecName: "utilities") pod "705c4b89-fb39-423a-9a4d-92868d053495" (UID: "705c4b89-fb39-423a-9a4d-92868d053495"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.239703 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705c4b89-fb39-423a-9a4d-92868d053495-kube-api-access-m85fh" (OuterVolumeSpecName: "kube-api-access-m85fh") pod "705c4b89-fb39-423a-9a4d-92868d053495" (UID: "705c4b89-fb39-423a-9a4d-92868d053495"). InnerVolumeSpecName "kube-api-access-m85fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.261305 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "705c4b89-fb39-423a-9a4d-92868d053495" (UID: "705c4b89-fb39-423a-9a4d-92868d053495"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.336027 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.336247 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705c4b89-fb39-423a-9a4d-92868d053495-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.336257 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85fh\" (UniqueName: \"kubernetes.io/projected/705c4b89-fb39-423a-9a4d-92868d053495-kube-api-access-m85fh\") on node \"crc\" DevicePath \"\"" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.633033 4932 generic.go:334] "Generic (PLEG): container finished" podID="705c4b89-fb39-423a-9a4d-92868d053495" containerID="3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6" exitCode=0 Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.633082 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerDied","Data":"3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6"} Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.633110 4932 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vp8x5" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.633125 4932 scope.go:117] "RemoveContainer" containerID="3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.633113 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vp8x5" event={"ID":"705c4b89-fb39-423a-9a4d-92868d053495","Type":"ContainerDied","Data":"3ad7ac02cb148064eea68fc9ae6c890af69bc5177c1e7b472560941e1844c6a2"} Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.657398 4932 scope.go:117] "RemoveContainer" containerID="caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.683549 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp8x5"] Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.692636 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vp8x5"] Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.707947 4932 scope.go:117] "RemoveContainer" containerID="9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.721171 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705c4b89-fb39-423a-9a4d-92868d053495" path="/var/lib/kubelet/pods/705c4b89-fb39-423a-9a4d-92868d053495/volumes" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.746800 4932 scope.go:117] "RemoveContainer" containerID="3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6" Mar 21 09:51:45 crc kubenswrapper[4932]: E0321 09:51:45.747961 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6\": container with ID 
starting with 3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6 not found: ID does not exist" containerID="3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.748048 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6"} err="failed to get container status \"3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6\": rpc error: code = NotFound desc = could not find container \"3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6\": container with ID starting with 3ff6aed412eca2a174f52fa23e7ab435712d341af202018add2176263aa127c6 not found: ID does not exist" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.748110 4932 scope.go:117] "RemoveContainer" containerID="caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7" Mar 21 09:51:45 crc kubenswrapper[4932]: E0321 09:51:45.748912 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7\": container with ID starting with caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7 not found: ID does not exist" containerID="caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.749026 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7"} err="failed to get container status \"caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7\": rpc error: code = NotFound desc = could not find container \"caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7\": container with ID starting with caf8fb233b45d0108b0631d246995162443bd23f0b916e6d4d5aa2cc600508a7 not found: 
ID does not exist" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.749163 4932 scope.go:117] "RemoveContainer" containerID="9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14" Mar 21 09:51:45 crc kubenswrapper[4932]: E0321 09:51:45.749830 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14\": container with ID starting with 9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14 not found: ID does not exist" containerID="9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14" Mar 21 09:51:45 crc kubenswrapper[4932]: I0321 09:51:45.749927 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14"} err="failed to get container status \"9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14\": rpc error: code = NotFound desc = could not find container \"9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14\": container with ID starting with 9912482ff3370b0c1abf15a445e3096a0aef07163a7c674c09d7b3e0333c9f14 not found: ID does not exist" Mar 21 09:51:50 crc kubenswrapper[4932]: I0321 09:51:50.702115 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:51:50 crc kubenswrapper[4932]: E0321 09:51:50.703086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:51:52 crc kubenswrapper[4932]: I0321 09:51:52.216391 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:52 crc kubenswrapper[4932]: I0321 09:51:52.267662 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:52 crc kubenswrapper[4932]: I0321 09:51:52.456113 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cjpx"] Mar 21 09:51:53 crc kubenswrapper[4932]: I0321 09:51:53.702437 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:51:53 crc kubenswrapper[4932]: E0321 09:51:53.702907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:51:53 crc kubenswrapper[4932]: I0321 09:51:53.703375 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:51:53 crc kubenswrapper[4932]: E0321 09:51:53.703583 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:51:53 crc kubenswrapper[4932]: I0321 09:51:53.708165 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2cjpx" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="registry-server" 
containerID="cri-o://79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c" gracePeriod=2 Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.174786 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.330037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf4r5\" (UniqueName: \"kubernetes.io/projected/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-kube-api-access-lf4r5\") pod \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.330088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-utilities\") pod \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.330938 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-utilities" (OuterVolumeSpecName: "utilities") pod "afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" (UID: "afd6c5b9-4378-4ce6-9fb1-eb27e33c1155"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.331041 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-catalog-content\") pod \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\" (UID: \"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155\") " Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.332003 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.335913 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-kube-api-access-lf4r5" (OuterVolumeSpecName: "kube-api-access-lf4r5") pod "afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" (UID: "afd6c5b9-4378-4ce6-9fb1-eb27e33c1155"). InnerVolumeSpecName "kube-api-access-lf4r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.434797 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf4r5\" (UniqueName: \"kubernetes.io/projected/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-kube-api-access-lf4r5\") on node \"crc\" DevicePath \"\"" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.464022 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" (UID: "afd6c5b9-4378-4ce6-9fb1-eb27e33c1155"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.537226 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.723509 4932 generic.go:334] "Generic (PLEG): container finished" podID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerID="79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c" exitCode=0 Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.723566 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cjpx" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.723552 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerDied","Data":"79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c"} Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.723678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cjpx" event={"ID":"afd6c5b9-4378-4ce6-9fb1-eb27e33c1155","Type":"ContainerDied","Data":"1fb3233e4f0b68e0c07a8f07d3f0270edb7e913953ca66eca66001001e26cd7f"} Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.723698 4932 scope.go:117] "RemoveContainer" containerID="79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.748265 4932 scope.go:117] "RemoveContainer" containerID="76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.766994 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cjpx"] Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 
09:51:54.777737 4932 scope.go:117] "RemoveContainer" containerID="f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.778134 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2cjpx"] Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.818790 4932 scope.go:117] "RemoveContainer" containerID="79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c" Mar 21 09:51:54 crc kubenswrapper[4932]: E0321 09:51:54.819226 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c\": container with ID starting with 79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c not found: ID does not exist" containerID="79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.819258 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c"} err="failed to get container status \"79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c\": rpc error: code = NotFound desc = could not find container \"79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c\": container with ID starting with 79055cb8e80fb003993132169c700b88b6b2f5b3e15b0a4d0d00659deefcf78c not found: ID does not exist" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.819281 4932 scope.go:117] "RemoveContainer" containerID="76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41" Mar 21 09:51:54 crc kubenswrapper[4932]: E0321 09:51:54.819880 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41\": container with ID 
starting with 76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41 not found: ID does not exist" containerID="76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.819952 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41"} err="failed to get container status \"76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41\": rpc error: code = NotFound desc = could not find container \"76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41\": container with ID starting with 76dca7abfe960282c0721ac8d4db50ad59be34b5fb047b965306d0fa89ad9f41 not found: ID does not exist" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.819968 4932 scope.go:117] "RemoveContainer" containerID="f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1" Mar 21 09:51:54 crc kubenswrapper[4932]: E0321 09:51:54.820251 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1\": container with ID starting with f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1 not found: ID does not exist" containerID="f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1" Mar 21 09:51:54 crc kubenswrapper[4932]: I0321 09:51:54.820274 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1"} err="failed to get container status \"f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1\": rpc error: code = NotFound desc = could not find container \"f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1\": container with ID starting with f16a45c4005eec21a7f1139724c5aff55066f3354e45f483aeb3a82b00506df1 not found: 
ID does not exist" Mar 21 09:51:55 crc kubenswrapper[4932]: I0321 09:51:55.722413 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" path="/var/lib/kubelet/pods/afd6c5b9-4378-4ce6-9fb1-eb27e33c1155/volumes" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.143576 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568112-7wpck"] Mar 21 09:52:00 crc kubenswrapper[4932]: E0321 09:52:00.146929 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="extract-utilities" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.147154 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="extract-utilities" Mar 21 09:52:00 crc kubenswrapper[4932]: E0321 09:52:00.147344 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="extract-utilities" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.147625 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="extract-utilities" Mar 21 09:52:00 crc kubenswrapper[4932]: E0321 09:52:00.147840 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="extract-content" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.148001 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="extract-content" Mar 21 09:52:00 crc kubenswrapper[4932]: E0321 09:52:00.148204 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="registry-server" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.148390 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="registry-server" Mar 21 09:52:00 crc kubenswrapper[4932]: E0321 09:52:00.148568 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="extract-content" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.148744 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="extract-content" Mar 21 09:52:00 crc kubenswrapper[4932]: E0321 09:52:00.148928 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="registry-server" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.149094 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="registry-server" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.149653 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd6c5b9-4378-4ce6-9fb1-eb27e33c1155" containerName="registry-server" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.149839 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="705c4b89-fb39-423a-9a4d-92868d053495" containerName="registry-server" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.151243 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.154592 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568112-7wpck"] Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.154717 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.154626 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.155467 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.270634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrw6\" (UniqueName: \"kubernetes.io/projected/03ff4199-5b8d-47de-a44d-7c6dce5f7f89-kube-api-access-lwrw6\") pod \"auto-csr-approver-29568112-7wpck\" (UID: \"03ff4199-5b8d-47de-a44d-7c6dce5f7f89\") " pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.373547 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwrw6\" (UniqueName: \"kubernetes.io/projected/03ff4199-5b8d-47de-a44d-7c6dce5f7f89-kube-api-access-lwrw6\") pod \"auto-csr-approver-29568112-7wpck\" (UID: \"03ff4199-5b8d-47de-a44d-7c6dce5f7f89\") " pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.400111 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwrw6\" (UniqueName: \"kubernetes.io/projected/03ff4199-5b8d-47de-a44d-7c6dce5f7f89-kube-api-access-lwrw6\") pod \"auto-csr-approver-29568112-7wpck\" (UID: \"03ff4199-5b8d-47de-a44d-7c6dce5f7f89\") " 
pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.480723 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:00 crc kubenswrapper[4932]: I0321 09:52:00.959434 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568112-7wpck"] Mar 21 09:52:01 crc kubenswrapper[4932]: I0321 09:52:01.797767 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568112-7wpck" event={"ID":"03ff4199-5b8d-47de-a44d-7c6dce5f7f89","Type":"ContainerStarted","Data":"b26c889209669e8b16d1e23a31e41c30c393bb61591b59d8d2204cb79ee1d405"} Mar 21 09:52:02 crc kubenswrapper[4932]: I0321 09:52:02.810362 4932 generic.go:334] "Generic (PLEG): container finished" podID="03ff4199-5b8d-47de-a44d-7c6dce5f7f89" containerID="70815e032a0a774eccb40e1777f4ad12ef67dafa3b0b6cd5b7131231989b7d09" exitCode=0 Mar 21 09:52:02 crc kubenswrapper[4932]: I0321 09:52:02.810470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568112-7wpck" event={"ID":"03ff4199-5b8d-47de-a44d-7c6dce5f7f89","Type":"ContainerDied","Data":"70815e032a0a774eccb40e1777f4ad12ef67dafa3b0b6cd5b7131231989b7d09"} Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.249146 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.374208 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwrw6\" (UniqueName: \"kubernetes.io/projected/03ff4199-5b8d-47de-a44d-7c6dce5f7f89-kube-api-access-lwrw6\") pod \"03ff4199-5b8d-47de-a44d-7c6dce5f7f89\" (UID: \"03ff4199-5b8d-47de-a44d-7c6dce5f7f89\") " Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.389155 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ff4199-5b8d-47de-a44d-7c6dce5f7f89-kube-api-access-lwrw6" (OuterVolumeSpecName: "kube-api-access-lwrw6") pod "03ff4199-5b8d-47de-a44d-7c6dce5f7f89" (UID: "03ff4199-5b8d-47de-a44d-7c6dce5f7f89"). InnerVolumeSpecName "kube-api-access-lwrw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.477021 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwrw6\" (UniqueName: \"kubernetes.io/projected/03ff4199-5b8d-47de-a44d-7c6dce5f7f89-kube-api-access-lwrw6\") on node \"crc\" DevicePath \"\"" Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.703662 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.703805 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:52:04 crc kubenswrapper[4932]: E0321 09:52:04.704049 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:52:04 crc 
kubenswrapper[4932]: I0321 09:52:04.704108 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:52:04 crc kubenswrapper[4932]: E0321 09:52:04.704160 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:52:04 crc kubenswrapper[4932]: E0321 09:52:04.704367 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.836588 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568112-7wpck" event={"ID":"03ff4199-5b8d-47de-a44d-7c6dce5f7f89","Type":"ContainerDied","Data":"b26c889209669e8b16d1e23a31e41c30c393bb61591b59d8d2204cb79ee1d405"} Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.836646 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26c889209669e8b16d1e23a31e41c30c393bb61591b59d8d2204cb79ee1d405" Mar 21 09:52:04 crc kubenswrapper[4932]: I0321 09:52:04.836673 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568112-7wpck" Mar 21 09:52:05 crc kubenswrapper[4932]: I0321 09:52:05.316634 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568106-rj7tb"] Mar 21 09:52:05 crc kubenswrapper[4932]: I0321 09:52:05.324732 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568106-rj7tb"] Mar 21 09:52:05 crc kubenswrapper[4932]: I0321 09:52:05.723321 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4c443e-2973-454d-8782-b8c03b70bf32" path="/var/lib/kubelet/pods/9d4c443e-2973-454d-8782-b8c03b70bf32/volumes" Mar 21 09:52:15 crc kubenswrapper[4932]: I0321 09:52:15.702429 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:52:15 crc kubenswrapper[4932]: E0321 09:52:15.703229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:52:17 crc kubenswrapper[4932]: I0321 09:52:17.709817 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:52:17 crc kubenswrapper[4932]: I0321 09:52:17.988837 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d"} Mar 21 09:52:18 crc kubenswrapper[4932]: I0321 09:52:18.701983 4932 scope.go:117] "RemoveContainer" 
containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:52:18 crc kubenswrapper[4932]: E0321 09:52:18.702284 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:52:26 crc kubenswrapper[4932]: I0321 09:52:26.061754 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" exitCode=1 Mar 21 09:52:26 crc kubenswrapper[4932]: I0321 09:52:26.061780 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d"} Mar 21 09:52:26 crc kubenswrapper[4932]: I0321 09:52:26.062283 4932 scope.go:117] "RemoveContainer" containerID="a3c5d025eb26994e81e353afc3dc8adf7272c526617c861cc2c365221106310b" Mar 21 09:52:26 crc kubenswrapper[4932]: I0321 09:52:26.063261 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:52:26 crc kubenswrapper[4932]: E0321 09:52:26.064093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:52:27 crc kubenswrapper[4932]: I0321 09:52:27.741040 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" 
Mar 21 09:52:27 crc kubenswrapper[4932]: I0321 09:52:27.742430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:52:27 crc kubenswrapper[4932]: I0321 09:52:27.742467 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:52:27 crc kubenswrapper[4932]: I0321 09:52:27.742489 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:52:27 crc kubenswrapper[4932]: I0321 09:52:27.744594 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:52:27 crc kubenswrapper[4932]: E0321 09:52:27.745318 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:52:29 crc kubenswrapper[4932]: I0321 09:52:29.702811 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:52:29 crc kubenswrapper[4932]: E0321 09:52:29.703555 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:52:32 crc kubenswrapper[4932]: I0321 09:52:32.580482 4932 scope.go:117] "RemoveContainer" containerID="33b8d621d18fb42819300ab00a9022784d81eb52590fee83cababbb78322becb" Mar 21 09:52:32 crc 
kubenswrapper[4932]: I0321 09:52:32.704039 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:52:33 crc kubenswrapper[4932]: I0321 09:52:33.130793 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f"} Mar 21 09:52:37 crc kubenswrapper[4932]: I0321 09:52:37.948192 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:52:37 crc kubenswrapper[4932]: I0321 09:52:37.948755 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:52:40 crc kubenswrapper[4932]: I0321 09:52:40.702505 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:52:40 crc kubenswrapper[4932]: E0321 09:52:40.703145 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:52:40 crc kubenswrapper[4932]: I0321 09:52:40.703800 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:52:40 crc kubenswrapper[4932]: E0321 09:52:40.704373 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:52:41 crc kubenswrapper[4932]: I0321 09:52:41.214571 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" exitCode=1 Mar 21 09:52:41 crc kubenswrapper[4932]: I0321 09:52:41.214623 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f"} Mar 21 09:52:41 crc kubenswrapper[4932]: I0321 09:52:41.214659 4932 scope.go:117] "RemoveContainer" containerID="34880ba9524868698848be6652847362bc47f5bce78378078b7a8e1bb3b9da83" Mar 21 09:52:41 crc kubenswrapper[4932]: I0321 09:52:41.215229 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:52:41 crc kubenswrapper[4932]: E0321 09:52:41.215497 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:52:47 crc kubenswrapper[4932]: I0321 09:52:47.947876 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:52:47 crc kubenswrapper[4932]: I0321 09:52:47.948489 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:52:47 crc kubenswrapper[4932]: I0321 09:52:47.949283 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:52:47 crc kubenswrapper[4932]: 
E0321 09:52:47.949543 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:52:55 crc kubenswrapper[4932]: I0321 09:52:55.702842 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:52:55 crc kubenswrapper[4932]: I0321 09:52:55.703162 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:52:55 crc kubenswrapper[4932]: E0321 09:52:55.703401 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:52:55 crc kubenswrapper[4932]: E0321 09:52:55.703440 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:52:58 crc kubenswrapper[4932]: I0321 09:52:58.702958 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:52:58 crc kubenswrapper[4932]: E0321 09:52:58.703674 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:53:06 crc kubenswrapper[4932]: I0321 09:53:06.702790 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:53:06 crc kubenswrapper[4932]: E0321 09:53:06.703539 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:53:10 crc kubenswrapper[4932]: I0321 09:53:10.703027 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:53:10 crc kubenswrapper[4932]: I0321 09:53:10.704323 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:53:10 crc kubenswrapper[4932]: E0321 09:53:10.704681 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:53:10 crc kubenswrapper[4932]: E0321 09:53:10.704887 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:53:21 crc kubenswrapper[4932]: I0321 09:53:21.703491 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:53:21 crc kubenswrapper[4932]: E0321 09:53:21.704822 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:53:22 crc kubenswrapper[4932]: I0321 09:53:22.702644 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:53:22 crc kubenswrapper[4932]: E0321 09:53:22.702931 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 09:53:24 crc kubenswrapper[4932]: I0321 09:53:24.702509 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:53:24 crc kubenswrapper[4932]: E0321 09:53:24.703052 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:53:35 crc 
kubenswrapper[4932]: I0321 09:53:35.702951 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:53:35 crc kubenswrapper[4932]: I0321 09:53:35.704056 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:53:35 crc kubenswrapper[4932]: E0321 09:53:35.704296 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:53:35 crc kubenswrapper[4932]: I0321 09:53:35.954112 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"fc77fa1ae642d9a4d6ebcb624085a9e1b7e1b750395461e09bf63b406f7f53d9"} Mar 21 09:53:36 crc kubenswrapper[4932]: I0321 09:53:36.703162 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:53:36 crc kubenswrapper[4932]: E0321 09:53:36.703786 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:53:46 crc kubenswrapper[4932]: I0321 09:53:46.702339 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:53:46 crc kubenswrapper[4932]: E0321 09:53:46.703124 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:53:50 crc kubenswrapper[4932]: I0321 09:53:50.702020 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:53:50 crc kubenswrapper[4932]: E0321 09:53:50.703463 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.174549 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568114-s6zcx"] Mar 21 09:54:00 crc kubenswrapper[4932]: E0321 09:54:00.175476 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ff4199-5b8d-47de-a44d-7c6dce5f7f89" containerName="oc" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.175492 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ff4199-5b8d-47de-a44d-7c6dce5f7f89" containerName="oc" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.175683 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ff4199-5b8d-47de-a44d-7c6dce5f7f89" containerName="oc" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.176389 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.179135 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.180588 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.180685 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.190824 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568114-s6zcx"] Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.235773 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vn2q\" (UniqueName: \"kubernetes.io/projected/52e8f5b2-8558-4c32-b5ab-b329e722b09b-kube-api-access-4vn2q\") pod \"auto-csr-approver-29568114-s6zcx\" (UID: \"52e8f5b2-8558-4c32-b5ab-b329e722b09b\") " pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.338397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vn2q\" (UniqueName: \"kubernetes.io/projected/52e8f5b2-8558-4c32-b5ab-b329e722b09b-kube-api-access-4vn2q\") pod \"auto-csr-approver-29568114-s6zcx\" (UID: \"52e8f5b2-8558-4c32-b5ab-b329e722b09b\") " pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.359921 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vn2q\" (UniqueName: \"kubernetes.io/projected/52e8f5b2-8558-4c32-b5ab-b329e722b09b-kube-api-access-4vn2q\") pod \"auto-csr-approver-29568114-s6zcx\" (UID: \"52e8f5b2-8558-4c32-b5ab-b329e722b09b\") " 
pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.494027 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.960742 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568114-s6zcx"] Mar 21 09:54:00 crc kubenswrapper[4932]: I0321 09:54:00.966159 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:54:01 crc kubenswrapper[4932]: I0321 09:54:01.187726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" event={"ID":"52e8f5b2-8558-4c32-b5ab-b329e722b09b","Type":"ContainerStarted","Data":"e5a401b05f6e0ac4032116548f01ed685bf0d051ad9ae141db1aef296c57bfd9"} Mar 21 09:54:01 crc kubenswrapper[4932]: I0321 09:54:01.702209 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:54:01 crc kubenswrapper[4932]: E0321 09:54:01.702440 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:54:03 crc kubenswrapper[4932]: I0321 09:54:03.204599 4932 generic.go:334] "Generic (PLEG): container finished" podID="52e8f5b2-8558-4c32-b5ab-b329e722b09b" containerID="1f3662e646d60a76b9473fd7f65376f3faba5939981c389aac057bd55466effd" exitCode=0 Mar 21 09:54:03 crc kubenswrapper[4932]: I0321 09:54:03.204708 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" 
event={"ID":"52e8f5b2-8558-4c32-b5ab-b329e722b09b","Type":"ContainerDied","Data":"1f3662e646d60a76b9473fd7f65376f3faba5939981c389aac057bd55466effd"} Mar 21 09:54:04 crc kubenswrapper[4932]: I0321 09:54:04.565060 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:04 crc kubenswrapper[4932]: I0321 09:54:04.722756 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vn2q\" (UniqueName: \"kubernetes.io/projected/52e8f5b2-8558-4c32-b5ab-b329e722b09b-kube-api-access-4vn2q\") pod \"52e8f5b2-8558-4c32-b5ab-b329e722b09b\" (UID: \"52e8f5b2-8558-4c32-b5ab-b329e722b09b\") " Mar 21 09:54:04 crc kubenswrapper[4932]: I0321 09:54:04.728615 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52e8f5b2-8558-4c32-b5ab-b329e722b09b-kube-api-access-4vn2q" (OuterVolumeSpecName: "kube-api-access-4vn2q") pod "52e8f5b2-8558-4c32-b5ab-b329e722b09b" (UID: "52e8f5b2-8558-4c32-b5ab-b329e722b09b"). InnerVolumeSpecName "kube-api-access-4vn2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:54:04 crc kubenswrapper[4932]: I0321 09:54:04.826132 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vn2q\" (UniqueName: \"kubernetes.io/projected/52e8f5b2-8558-4c32-b5ab-b329e722b09b-kube-api-access-4vn2q\") on node \"crc\" DevicePath \"\"" Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.224084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" event={"ID":"52e8f5b2-8558-4c32-b5ab-b329e722b09b","Type":"ContainerDied","Data":"e5a401b05f6e0ac4032116548f01ed685bf0d051ad9ae141db1aef296c57bfd9"} Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.224132 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568114-s6zcx" Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.224152 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a401b05f6e0ac4032116548f01ed685bf0d051ad9ae141db1aef296c57bfd9" Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.639540 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568108-n282s"] Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.648566 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568108-n282s"] Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.703228 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:54:05 crc kubenswrapper[4932]: E0321 09:54:05.703485 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:54:05 crc kubenswrapper[4932]: I0321 09:54:05.714631 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32688360-089e-4505-88a5-ea029c27868b" path="/var/lib/kubelet/pods/32688360-089e-4505-88a5-ea029c27868b/volumes" Mar 21 09:54:16 crc kubenswrapper[4932]: I0321 09:54:16.703117 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:54:16 crc kubenswrapper[4932]: E0321 09:54:16.704120 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:54:20 crc kubenswrapper[4932]: I0321 09:54:20.704321 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:54:20 crc kubenswrapper[4932]: E0321 09:54:20.704983 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:54:30 crc kubenswrapper[4932]: I0321 09:54:30.703502 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:54:30 crc kubenswrapper[4932]: E0321 09:54:30.704553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:54:31 crc kubenswrapper[4932]: I0321 09:54:31.702869 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:54:31 crc kubenswrapper[4932]: E0321 09:54:31.703128 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:54:32 crc kubenswrapper[4932]: I0321 09:54:32.728517 4932 scope.go:117] "RemoveContainer" 
containerID="615a99fd154d867acd636b429f81db7b78c1d3de3e419dc742aacb7f6c37eb30" Mar 21 09:54:43 crc kubenswrapper[4932]: I0321 09:54:43.702655 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:54:43 crc kubenswrapper[4932]: E0321 09:54:43.703524 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:54:44 crc kubenswrapper[4932]: I0321 09:54:44.703284 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:54:44 crc kubenswrapper[4932]: E0321 09:54:44.704107 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:54:55 crc kubenswrapper[4932]: I0321 09:54:55.702334 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:54:55 crc kubenswrapper[4932]: E0321 09:54:55.703069 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:54:59 crc kubenswrapper[4932]: I0321 09:54:59.703035 4932 scope.go:117] "RemoveContainer" 
containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:54:59 crc kubenswrapper[4932]: E0321 09:54:59.703835 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:55:09 crc kubenswrapper[4932]: I0321 09:55:09.703024 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:55:09 crc kubenswrapper[4932]: E0321 09:55:09.703634 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:55:13 crc kubenswrapper[4932]: I0321 09:55:13.703159 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:55:13 crc kubenswrapper[4932]: E0321 09:55:13.704787 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:55:21 crc kubenswrapper[4932]: I0321 09:55:21.703379 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:55:21 crc kubenswrapper[4932]: E0321 09:55:21.704702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:55:24 crc kubenswrapper[4932]: I0321 09:55:24.703821 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:55:24 crc kubenswrapper[4932]: E0321 09:55:24.705004 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:55:32 crc kubenswrapper[4932]: I0321 09:55:32.702871 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:55:32 crc kubenswrapper[4932]: E0321 09:55:32.703925 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:55:35 crc kubenswrapper[4932]: I0321 09:55:35.705713 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:55:35 crc kubenswrapper[4932]: E0321 09:55:35.706844 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" 
podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:55:46 crc kubenswrapper[4932]: I0321 09:55:46.702748 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:55:46 crc kubenswrapper[4932]: E0321 09:55:46.703392 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:55:48 crc kubenswrapper[4932]: I0321 09:55:48.702279 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:55:48 crc kubenswrapper[4932]: E0321 09:55:48.702750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:55:59 crc kubenswrapper[4932]: I0321 09:55:59.703541 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:55:59 crc kubenswrapper[4932]: E0321 09:55:59.704879 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.153756 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568116-95rq5"] Mar 21 09:56:00 
crc kubenswrapper[4932]: E0321 09:56:00.154388 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e8f5b2-8558-4c32-b5ab-b329e722b09b" containerName="oc" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.154408 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e8f5b2-8558-4c32-b5ab-b329e722b09b" containerName="oc" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.154707 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="52e8f5b2-8558-4c32-b5ab-b329e722b09b" containerName="oc" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.155525 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.157635 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.157846 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.158481 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.172139 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568116-95rq5"] Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.183888 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdk4m\" (UniqueName: \"kubernetes.io/projected/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b-kube-api-access-vdk4m\") pod \"auto-csr-approver-29568116-95rq5\" (UID: \"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b\") " pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.226105 4932 patch_prober.go:28] interesting 
pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.226183 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.286267 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdk4m\" (UniqueName: \"kubernetes.io/projected/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b-kube-api-access-vdk4m\") pod \"auto-csr-approver-29568116-95rq5\" (UID: \"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b\") " pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.317984 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdk4m\" (UniqueName: \"kubernetes.io/projected/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b-kube-api-access-vdk4m\") pod \"auto-csr-approver-29568116-95rq5\" (UID: \"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b\") " pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.487779 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:00 crc kubenswrapper[4932]: I0321 09:56:00.930561 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568116-95rq5"] Mar 21 09:56:00 crc kubenswrapper[4932]: W0321 09:56:00.933371 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf33b86a9_169e_43e0_9d21_ecc4aaf17f8b.slice/crio-9e81dac5b2d212972b0f1f4b0f86de01d584c847a5059b482a3e62cd262cc226 WatchSource:0}: Error finding container 9e81dac5b2d212972b0f1f4b0f86de01d584c847a5059b482a3e62cd262cc226: Status 404 returned error can't find the container with id 9e81dac5b2d212972b0f1f4b0f86de01d584c847a5059b482a3e62cd262cc226 Mar 21 09:56:01 crc kubenswrapper[4932]: I0321 09:56:01.703927 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:56:01 crc kubenswrapper[4932]: E0321 09:56:01.704269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:56:01 crc kubenswrapper[4932]: I0321 09:56:01.852775 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568116-95rq5" event={"ID":"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b","Type":"ContainerStarted","Data":"9e81dac5b2d212972b0f1f4b0f86de01d584c847a5059b482a3e62cd262cc226"} Mar 21 09:56:02 crc kubenswrapper[4932]: I0321 09:56:02.868720 4932 generic.go:334] "Generic (PLEG): container finished" podID="f33b86a9-169e-43e0-9d21-ecc4aaf17f8b" containerID="fd04c6cd184e1053c36251806152f55dc58699a11aad8745a7eadcb2ff33cfcf" exitCode=0 Mar 21 09:56:02 crc 
kubenswrapper[4932]: I0321 09:56:02.868785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568116-95rq5" event={"ID":"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b","Type":"ContainerDied","Data":"fd04c6cd184e1053c36251806152f55dc58699a11aad8745a7eadcb2ff33cfcf"} Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.258312 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.280194 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdk4m\" (UniqueName: \"kubernetes.io/projected/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b-kube-api-access-vdk4m\") pod \"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b\" (UID: \"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b\") " Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.288051 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b-kube-api-access-vdk4m" (OuterVolumeSpecName: "kube-api-access-vdk4m") pod "f33b86a9-169e-43e0-9d21-ecc4aaf17f8b" (UID: "f33b86a9-169e-43e0-9d21-ecc4aaf17f8b"). InnerVolumeSpecName "kube-api-access-vdk4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.382781 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdk4m\" (UniqueName: \"kubernetes.io/projected/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b-kube-api-access-vdk4m\") on node \"crc\" DevicePath \"\"" Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.892826 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568116-95rq5" event={"ID":"f33b86a9-169e-43e0-9d21-ecc4aaf17f8b","Type":"ContainerDied","Data":"9e81dac5b2d212972b0f1f4b0f86de01d584c847a5059b482a3e62cd262cc226"} Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.892870 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e81dac5b2d212972b0f1f4b0f86de01d584c847a5059b482a3e62cd262cc226" Mar 21 09:56:04 crc kubenswrapper[4932]: I0321 09:56:04.892879 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568116-95rq5" Mar 21 09:56:05 crc kubenswrapper[4932]: I0321 09:56:05.336319 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568110-v6n9z"] Mar 21 09:56:05 crc kubenswrapper[4932]: I0321 09:56:05.343394 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568110-v6n9z"] Mar 21 09:56:05 crc kubenswrapper[4932]: I0321 09:56:05.716038 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ead2eb-b4fb-4a71-b94b-47e4b4159ff5" path="/var/lib/kubelet/pods/58ead2eb-b4fb-4a71-b94b-47e4b4159ff5/volumes" Mar 21 09:56:12 crc kubenswrapper[4932]: I0321 09:56:12.702748 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:56:12 crc kubenswrapper[4932]: I0321 09:56:12.703395 4932 scope.go:117] "RemoveContainer" 
containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:56:12 crc kubenswrapper[4932]: E0321 09:56:12.703491 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:56:12 crc kubenswrapper[4932]: E0321 09:56:12.703617 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:56:23 crc kubenswrapper[4932]: I0321 09:56:23.702873 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:56:23 crc kubenswrapper[4932]: E0321 09:56:23.703655 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:56:27 crc kubenswrapper[4932]: I0321 09:56:27.709840 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:56:27 crc kubenswrapper[4932]: E0321 09:56:27.710759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" 
pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:56:30 crc kubenswrapper[4932]: I0321 09:56:30.226032 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:56:30 crc kubenswrapper[4932]: I0321 09:56:30.226446 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:56:32 crc kubenswrapper[4932]: I0321 09:56:32.820024 4932 scope.go:117] "RemoveContainer" containerID="2087aaab03b9fab9f6743d3efeef41b8f1e07d5790f3a9c3184ac8c9539e9336" Mar 21 09:56:36 crc kubenswrapper[4932]: I0321 09:56:36.702731 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:56:36 crc kubenswrapper[4932]: E0321 09:56:36.703832 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:56:40 crc kubenswrapper[4932]: I0321 09:56:40.702728 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:56:40 crc kubenswrapper[4932]: E0321 09:56:40.704042 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:56:48 crc kubenswrapper[4932]: I0321 09:56:48.702312 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:56:48 crc kubenswrapper[4932]: E0321 09:56:48.703325 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:56:51 crc kubenswrapper[4932]: I0321 09:56:51.702658 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:56:51 crc kubenswrapper[4932]: E0321 09:56:51.703546 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.225340 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.226126 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.226210 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.227552 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc77fa1ae642d9a4d6ebcb624085a9e1b7e1b750395461e09bf63b406f7f53d9"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.227665 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://fc77fa1ae642d9a4d6ebcb624085a9e1b7e1b750395461e09bf63b406f7f53d9" gracePeriod=600 Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.448678 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="fc77fa1ae642d9a4d6ebcb624085a9e1b7e1b750395461e09bf63b406f7f53d9" exitCode=0 Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.448769 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"fc77fa1ae642d9a4d6ebcb624085a9e1b7e1b750395461e09bf63b406f7f53d9"} Mar 21 09:57:00 crc kubenswrapper[4932]: I0321 09:57:00.449131 4932 scope.go:117] "RemoveContainer" containerID="580cf7bb576e48aa7cf175db8643708fa2c4701aa90415e692462578dec4a7f4" Mar 21 09:57:01 crc kubenswrapper[4932]: I0321 09:57:01.462499 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"} Mar 21 09:57:03 crc kubenswrapper[4932]: I0321 09:57:03.703285 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:57:03 crc kubenswrapper[4932]: E0321 09:57:03.704109 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:57:04 crc kubenswrapper[4932]: I0321 09:57:04.703663 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:57:04 crc kubenswrapper[4932]: E0321 09:57:04.704196 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:57:18 crc kubenswrapper[4932]: I0321 09:57:18.702429 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:57:18 crc kubenswrapper[4932]: E0321 09:57:18.703146 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:57:18 crc kubenswrapper[4932]: I0321 09:57:18.703662 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:57:18 crc kubenswrapper[4932]: E0321 09:57:18.704180 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:57:29 crc kubenswrapper[4932]: I0321 09:57:29.702809 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:57:29 crc kubenswrapper[4932]: E0321 09:57:29.703590 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:57:33 crc kubenswrapper[4932]: I0321 09:57:33.702429 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:57:34 crc kubenswrapper[4932]: I0321 09:57:34.786501 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"} Mar 21 09:57:37 crc kubenswrapper[4932]: I0321 09:57:37.740568 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:57:37 crc kubenswrapper[4932]: I0321 09:57:37.741004 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:57:41 crc kubenswrapper[4932]: I0321 09:57:41.703391 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:57:41 crc kubenswrapper[4932]: I0321 09:57:41.847999 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" exitCode=1 Mar 21 09:57:41 crc kubenswrapper[4932]: I0321 09:57:41.848043 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"} Mar 21 09:57:41 crc kubenswrapper[4932]: I0321 09:57:41.848086 4932 scope.go:117] "RemoveContainer" containerID="38ec14c6a2471e83262451b5e5cad9c9f8ae923bcbc51fbe025b2dbf4d41a22d" Mar 21 09:57:41 crc kubenswrapper[4932]: I0321 09:57:41.848933 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:57:41 crc kubenswrapper[4932]: E0321 09:57:41.849200 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:57:42 crc kubenswrapper[4932]: I0321 09:57:42.863206 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"} Mar 21 09:57:47 crc kubenswrapper[4932]: I0321 09:57:47.741240 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:57:47 crc kubenswrapper[4932]: I0321 09:57:47.741831 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 09:57:47 crc kubenswrapper[4932]: I0321 09:57:47.742894 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:57:47 crc kubenswrapper[4932]: E0321 09:57:47.743115 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:57:47 crc kubenswrapper[4932]: I0321 09:57:47.947942 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:57:47 crc kubenswrapper[4932]: I0321 09:57:47.948210 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:57:49 crc kubenswrapper[4932]: I0321 09:57:49.933996 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" exitCode=1 Mar 21 09:57:49 crc kubenswrapper[4932]: I0321 09:57:49.934579 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"} Mar 21 09:57:49 crc kubenswrapper[4932]: I0321 09:57:49.934770 4932 scope.go:117] "RemoveContainer" containerID="f4eb832b14cd8377c396c53467f54d68ce8b5d354c811867e6bfbefc81442d4f" Mar 21 09:57:49 crc kubenswrapper[4932]: I0321 
09:57:49.936323 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:57:49 crc kubenswrapper[4932]: E0321 09:57:49.939457 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:57:57 crc kubenswrapper[4932]: I0321 09:57:57.948186 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:57:57 crc kubenswrapper[4932]: I0321 09:57:57.948851 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 09:57:57 crc kubenswrapper[4932]: I0321 09:57:57.949834 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:57:57 crc kubenswrapper[4932]: E0321 09:57:57.950052 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:57:59 crc kubenswrapper[4932]: I0321 09:57:59.702808 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:57:59 crc kubenswrapper[4932]: E0321 09:57:59.703327 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.139879 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568118-5t9gc"] Mar 21 09:58:00 crc kubenswrapper[4932]: E0321 09:58:00.140448 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33b86a9-169e-43e0-9d21-ecc4aaf17f8b" containerName="oc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.140469 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33b86a9-169e-43e0-9d21-ecc4aaf17f8b" containerName="oc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.140717 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33b86a9-169e-43e0-9d21-ecc4aaf17f8b" containerName="oc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.141560 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.144322 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.144397 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.144620 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.149713 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568118-5t9gc"] Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.193727 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vt2j\" (UniqueName: 
\"kubernetes.io/projected/c5f253ff-12ab-4142-a9ee-4069b26afc64-kube-api-access-9vt2j\") pod \"auto-csr-approver-29568118-5t9gc\" (UID: \"c5f253ff-12ab-4142-a9ee-4069b26afc64\") " pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.298603 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vt2j\" (UniqueName: \"kubernetes.io/projected/c5f253ff-12ab-4142-a9ee-4069b26afc64-kube-api-access-9vt2j\") pod \"auto-csr-approver-29568118-5t9gc\" (UID: \"c5f253ff-12ab-4142-a9ee-4069b26afc64\") " pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.320066 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vt2j\" (UniqueName: \"kubernetes.io/projected/c5f253ff-12ab-4142-a9ee-4069b26afc64-kube-api-access-9vt2j\") pod \"auto-csr-approver-29568118-5t9gc\" (UID: \"c5f253ff-12ab-4142-a9ee-4069b26afc64\") " pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.461519 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:00 crc kubenswrapper[4932]: I0321 09:58:00.897339 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568118-5t9gc"] Mar 21 09:58:01 crc kubenswrapper[4932]: I0321 09:58:01.030727 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" event={"ID":"c5f253ff-12ab-4142-a9ee-4069b26afc64","Type":"ContainerStarted","Data":"d937567f1ac821541b04ac45cf30818a5f7761ef3f9d63af918a1e46571e582e"} Mar 21 09:58:02 crc kubenswrapper[4932]: I0321 09:58:02.040151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" event={"ID":"c5f253ff-12ab-4142-a9ee-4069b26afc64","Type":"ContainerStarted","Data":"9ee931476ea7d2de1ecafe9cb5fc9ccb199be6f263341af2c4434d374a3240f0"} Mar 21 09:58:02 crc kubenswrapper[4932]: I0321 09:58:02.057194 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" podStartSLOduration=1.2918220919999999 podStartE2EDuration="2.057172245s" podCreationTimestamp="2026-03-21 09:58:00 +0000 UTC" firstStartedPulling="2026-03-21 09:58:00.90615673 +0000 UTC m=+3584.501354999" lastFinishedPulling="2026-03-21 09:58:01.671506883 +0000 UTC m=+3585.266705152" observedRunningTime="2026-03-21 09:58:02.053312663 +0000 UTC m=+3585.648510932" watchObservedRunningTime="2026-03-21 09:58:02.057172245 +0000 UTC m=+3585.652370514" Mar 21 09:58:03 crc kubenswrapper[4932]: I0321 09:58:03.053576 4932 generic.go:334] "Generic (PLEG): container finished" podID="c5f253ff-12ab-4142-a9ee-4069b26afc64" containerID="9ee931476ea7d2de1ecafe9cb5fc9ccb199be6f263341af2c4434d374a3240f0" exitCode=0 Mar 21 09:58:03 crc kubenswrapper[4932]: I0321 09:58:03.053621 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" 
event={"ID":"c5f253ff-12ab-4142-a9ee-4069b26afc64","Type":"ContainerDied","Data":"9ee931476ea7d2de1ecafe9cb5fc9ccb199be6f263341af2c4434d374a3240f0"} Mar 21 09:58:04 crc kubenswrapper[4932]: I0321 09:58:04.599441 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:04 crc kubenswrapper[4932]: I0321 09:58:04.691684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vt2j\" (UniqueName: \"kubernetes.io/projected/c5f253ff-12ab-4142-a9ee-4069b26afc64-kube-api-access-9vt2j\") pod \"c5f253ff-12ab-4142-a9ee-4069b26afc64\" (UID: \"c5f253ff-12ab-4142-a9ee-4069b26afc64\") " Mar 21 09:58:04 crc kubenswrapper[4932]: I0321 09:58:04.699054 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f253ff-12ab-4142-a9ee-4069b26afc64-kube-api-access-9vt2j" (OuterVolumeSpecName: "kube-api-access-9vt2j") pod "c5f253ff-12ab-4142-a9ee-4069b26afc64" (UID: "c5f253ff-12ab-4142-a9ee-4069b26afc64"). InnerVolumeSpecName "kube-api-access-9vt2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:58:04 crc kubenswrapper[4932]: I0321 09:58:04.795342 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vt2j\" (UniqueName: \"kubernetes.io/projected/c5f253ff-12ab-4142-a9ee-4069b26afc64-kube-api-access-9vt2j\") on node \"crc\" DevicePath \"\"" Mar 21 09:58:05 crc kubenswrapper[4932]: I0321 09:58:05.076770 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" event={"ID":"c5f253ff-12ab-4142-a9ee-4069b26afc64","Type":"ContainerDied","Data":"d937567f1ac821541b04ac45cf30818a5f7761ef3f9d63af918a1e46571e582e"} Mar 21 09:58:05 crc kubenswrapper[4932]: I0321 09:58:05.076845 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d937567f1ac821541b04ac45cf30818a5f7761ef3f9d63af918a1e46571e582e" Mar 21 09:58:05 crc kubenswrapper[4932]: I0321 09:58:05.076953 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568118-5t9gc" Mar 21 09:58:05 crc kubenswrapper[4932]: I0321 09:58:05.150414 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568112-7wpck"] Mar 21 09:58:05 crc kubenswrapper[4932]: I0321 09:58:05.157089 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568112-7wpck"] Mar 21 09:58:05 crc kubenswrapper[4932]: I0321 09:58:05.743827 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ff4199-5b8d-47de-a44d-7c6dce5f7f89" path="/var/lib/kubelet/pods/03ff4199-5b8d-47de-a44d-7c6dce5f7f89/volumes" Mar 21 09:58:10 crc kubenswrapper[4932]: I0321 09:58:10.703683 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:58:10 crc kubenswrapper[4932]: E0321 09:58:10.705058 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:58:12 crc kubenswrapper[4932]: I0321 09:58:12.703091 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:58:12 crc kubenswrapper[4932]: E0321 09:58:12.703614 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:58:23 crc kubenswrapper[4932]: I0321 09:58:23.702576 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:58:23 crc kubenswrapper[4932]: E0321 09:58:23.703436 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:58:25 crc kubenswrapper[4932]: I0321 09:58:25.705474 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:58:25 crc kubenswrapper[4932]: E0321 09:58:25.706103 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:58:32 crc kubenswrapper[4932]: I0321 09:58:32.982050 4932 scope.go:117] "RemoveContainer" containerID="70815e032a0a774eccb40e1777f4ad12ef67dafa3b0b6cd5b7131231989b7d09" Mar 21 09:58:35 crc kubenswrapper[4932]: I0321 09:58:35.703440 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:58:35 crc kubenswrapper[4932]: E0321 09:58:35.704177 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:58:39 crc kubenswrapper[4932]: I0321 09:58:39.702826 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:58:39 crc kubenswrapper[4932]: E0321 09:58:39.703622 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:58:46 crc kubenswrapper[4932]: I0321 09:58:46.703128 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:58:46 crc kubenswrapper[4932]: E0321 09:58:46.703936 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 
09:58:54 crc kubenswrapper[4932]: I0321 09:58:54.703882 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:58:54 crc kubenswrapper[4932]: E0321 09:58:54.704588 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:58:59 crc kubenswrapper[4932]: I0321 09:58:59.702958 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:58:59 crc kubenswrapper[4932]: E0321 09:58:59.703629 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:59:00 crc kubenswrapper[4932]: I0321 09:59:00.226178 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:59:00 crc kubenswrapper[4932]: I0321 09:59:00.226549 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:59:06 crc kubenswrapper[4932]: I0321 09:59:06.703026 4932 scope.go:117] 
"RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:59:06 crc kubenswrapper[4932]: E0321 09:59:06.703920 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.694707 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-698g5"] Mar 21 09:59:08 crc kubenswrapper[4932]: E0321 09:59:08.696322 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f253ff-12ab-4142-a9ee-4069b26afc64" containerName="oc" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.696347 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f253ff-12ab-4142-a9ee-4069b26afc64" containerName="oc" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.696593 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f253ff-12ab-4142-a9ee-4069b26afc64" containerName="oc" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.698417 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.717886 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-698g5"] Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.879928 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-utilities\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.880028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-catalog-content\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.881226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6tz9\" (UniqueName: \"kubernetes.io/projected/03c74269-1a30-41e3-a4f3-f8d98bb99067-kube-api-access-s6tz9\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.982775 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-utilities\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.983223 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-catalog-content\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.983436 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-utilities\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.983708 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6tz9\" (UniqueName: \"kubernetes.io/projected/03c74269-1a30-41e3-a4f3-f8d98bb99067-kube-api-access-s6tz9\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:08 crc kubenswrapper[4932]: I0321 09:59:08.984261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-catalog-content\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:09 crc kubenswrapper[4932]: I0321 09:59:09.009759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6tz9\" (UniqueName: \"kubernetes.io/projected/03c74269-1a30-41e3-a4f3-f8d98bb99067-kube-api-access-s6tz9\") pod \"certified-operators-698g5\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:09 crc kubenswrapper[4932]: I0321 09:59:09.056496 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:09 crc kubenswrapper[4932]: I0321 09:59:09.581591 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-698g5"] Mar 21 09:59:09 crc kubenswrapper[4932]: I0321 09:59:09.774084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerStarted","Data":"82cd8e478c7e4e25ff3b6ebf0e0f3c7f18ae40d3906ecb49a5100ce5f26a732b"} Mar 21 09:59:10 crc kubenswrapper[4932]: I0321 09:59:10.702731 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:59:10 crc kubenswrapper[4932]: E0321 09:59:10.703402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:59:10 crc kubenswrapper[4932]: I0321 09:59:10.787659 4932 generic.go:334] "Generic (PLEG): container finished" podID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerID="4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c" exitCode=0 Mar 21 09:59:10 crc kubenswrapper[4932]: I0321 09:59:10.787734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerDied","Data":"4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c"} Mar 21 09:59:10 crc kubenswrapper[4932]: I0321 09:59:10.791171 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 09:59:11 crc kubenswrapper[4932]: I0321 09:59:11.797198 4932 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerStarted","Data":"bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8"} Mar 21 09:59:12 crc kubenswrapper[4932]: I0321 09:59:12.811695 4932 generic.go:334] "Generic (PLEG): container finished" podID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerID="bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8" exitCode=0 Mar 21 09:59:12 crc kubenswrapper[4932]: I0321 09:59:12.811785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerDied","Data":"bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8"} Mar 21 09:59:13 crc kubenswrapper[4932]: I0321 09:59:13.823794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerStarted","Data":"9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611"} Mar 21 09:59:13 crc kubenswrapper[4932]: I0321 09:59:13.852499 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-698g5" podStartSLOduration=3.432623685 podStartE2EDuration="5.852478028s" podCreationTimestamp="2026-03-21 09:59:08 +0000 UTC" firstStartedPulling="2026-03-21 09:59:10.790890237 +0000 UTC m=+3654.386088506" lastFinishedPulling="2026-03-21 09:59:13.21074459 +0000 UTC m=+3656.805942849" observedRunningTime="2026-03-21 09:59:13.844815098 +0000 UTC m=+3657.440013367" watchObservedRunningTime="2026-03-21 09:59:13.852478028 +0000 UTC m=+3657.447676297" Mar 21 09:59:19 crc kubenswrapper[4932]: I0321 09:59:19.057112 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:19 crc 
kubenswrapper[4932]: I0321 09:59:19.058436 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:19 crc kubenswrapper[4932]: I0321 09:59:19.123631 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:19 crc kubenswrapper[4932]: I0321 09:59:19.935855 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:20 crc kubenswrapper[4932]: I0321 09:59:20.005702 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-698g5"] Mar 21 09:59:21 crc kubenswrapper[4932]: I0321 09:59:21.703026 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:59:21 crc kubenswrapper[4932]: I0321 09:59:21.703511 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:59:21 crc kubenswrapper[4932]: E0321 09:59:21.703848 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:59:21 crc kubenswrapper[4932]: E0321 09:59:21.703848 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:59:21 crc kubenswrapper[4932]: I0321 09:59:21.900372 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-698g5" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="registry-server" containerID="cri-o://9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611" gracePeriod=2 Mar 21 09:59:22 crc kubenswrapper[4932]: E0321 09:59:22.007577 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03c74269_1a30_41e3_a4f3_f8d98bb99067.slice/crio-9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611.scope\": RecentStats: unable to find data in memory cache]" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.394999 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.457167 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6tz9\" (UniqueName: \"kubernetes.io/projected/03c74269-1a30-41e3-a4f3-f8d98bb99067-kube-api-access-s6tz9\") pod \"03c74269-1a30-41e3-a4f3-f8d98bb99067\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.457535 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-utilities\") pod \"03c74269-1a30-41e3-a4f3-f8d98bb99067\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.457647 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-catalog-content\") pod \"03c74269-1a30-41e3-a4f3-f8d98bb99067\" (UID: \"03c74269-1a30-41e3-a4f3-f8d98bb99067\") " Mar 21 09:59:22 
crc kubenswrapper[4932]: I0321 09:59:22.458340 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-utilities" (OuterVolumeSpecName: "utilities") pod "03c74269-1a30-41e3-a4f3-f8d98bb99067" (UID: "03c74269-1a30-41e3-a4f3-f8d98bb99067"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.464897 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c74269-1a30-41e3-a4f3-f8d98bb99067-kube-api-access-s6tz9" (OuterVolumeSpecName: "kube-api-access-s6tz9") pod "03c74269-1a30-41e3-a4f3-f8d98bb99067" (UID: "03c74269-1a30-41e3-a4f3-f8d98bb99067"). InnerVolumeSpecName "kube-api-access-s6tz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.560244 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.560286 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6tz9\" (UniqueName: \"kubernetes.io/projected/03c74269-1a30-41e3-a4f3-f8d98bb99067-kube-api-access-s6tz9\") on node \"crc\" DevicePath \"\"" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.738925 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03c74269-1a30-41e3-a4f3-f8d98bb99067" (UID: "03c74269-1a30-41e3-a4f3-f8d98bb99067"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.763873 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03c74269-1a30-41e3-a4f3-f8d98bb99067-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.913674 4932 generic.go:334] "Generic (PLEG): container finished" podID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerID="9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611" exitCode=0 Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.913734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerDied","Data":"9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611"} Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.913768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-698g5" event={"ID":"03c74269-1a30-41e3-a4f3-f8d98bb99067","Type":"ContainerDied","Data":"82cd8e478c7e4e25ff3b6ebf0e0f3c7f18ae40d3906ecb49a5100ce5f26a732b"} Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.913789 4932 scope.go:117] "RemoveContainer" containerID="9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.913787 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-698g5" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.935601 4932 scope.go:117] "RemoveContainer" containerID="bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.965777 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-698g5"] Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.986492 4932 scope.go:117] "RemoveContainer" containerID="4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c" Mar 21 09:59:22 crc kubenswrapper[4932]: I0321 09:59:22.990884 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-698g5"] Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.031440 4932 scope.go:117] "RemoveContainer" containerID="9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611" Mar 21 09:59:23 crc kubenswrapper[4932]: E0321 09:59:23.032003 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611\": container with ID starting with 9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611 not found: ID does not exist" containerID="9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611" Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.032070 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611"} err="failed to get container status \"9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611\": rpc error: code = NotFound desc = could not find container \"9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611\": container with ID starting with 9c8f81495886af80094909ca18183723f10131869e0711421ce75112fbadc611 not 
found: ID does not exist" Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.032095 4932 scope.go:117] "RemoveContainer" containerID="bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8" Mar 21 09:59:23 crc kubenswrapper[4932]: E0321 09:59:23.032401 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8\": container with ID starting with bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8 not found: ID does not exist" containerID="bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8" Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.032447 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8"} err="failed to get container status \"bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8\": rpc error: code = NotFound desc = could not find container \"bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8\": container with ID starting with bc39c68f7b7217bcb7239b0ac23ad1ca1134990c8cdb8074bccd67418095acb8 not found: ID does not exist" Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.032468 4932 scope.go:117] "RemoveContainer" containerID="4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c" Mar 21 09:59:23 crc kubenswrapper[4932]: E0321 09:59:23.033138 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c\": container with ID starting with 4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c not found: ID does not exist" containerID="4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c" Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.033200 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c"} err="failed to get container status \"4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c\": rpc error: code = NotFound desc = could not find container \"4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c\": container with ID starting with 4eeee7ee6f9382c01b28ca48c302e6815c15de638d08cbc0ec649a280464909c not found: ID does not exist" Mar 21 09:59:23 crc kubenswrapper[4932]: I0321 09:59:23.713592 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" path="/var/lib/kubelet/pods/03c74269-1a30-41e3-a4f3-f8d98bb99067/volumes" Mar 21 09:59:30 crc kubenswrapper[4932]: I0321 09:59:30.225330 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 09:59:30 crc kubenswrapper[4932]: I0321 09:59:30.226030 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 09:59:34 crc kubenswrapper[4932]: I0321 09:59:34.702778 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:59:34 crc kubenswrapper[4932]: E0321 09:59:34.703432 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:59:34 crc kubenswrapper[4932]: I0321 09:59:34.703435 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:59:34 crc kubenswrapper[4932]: E0321 09:59:34.703625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:59:46 crc kubenswrapper[4932]: I0321 09:59:46.703087 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:59:46 crc kubenswrapper[4932]: E0321 09:59:46.703829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:59:48 crc kubenswrapper[4932]: I0321 09:59:48.704182 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:59:48 crc kubenswrapper[4932]: E0321 09:59:48.704705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 09:59:58 crc kubenswrapper[4932]: I0321 09:59:58.708781 
4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 09:59:58 crc kubenswrapper[4932]: E0321 09:59:58.709961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 09:59:59 crc kubenswrapper[4932]: I0321 09:59:59.702939 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 09:59:59 crc kubenswrapper[4932]: E0321 09:59:59.703169 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.143258 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568120-wt6rh"] Mar 21 10:00:00 crc kubenswrapper[4932]: E0321 10:00:00.144363 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="extract-content" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.144399 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="extract-content" Mar 21 10:00:00 crc kubenswrapper[4932]: E0321 10:00:00.144414 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="registry-server" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.144420 4932 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="registry-server" Mar 21 10:00:00 crc kubenswrapper[4932]: E0321 10:00:00.144451 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="extract-utilities" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.144458 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="extract-utilities" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.144644 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c74269-1a30-41e3-a4f3-f8d98bb99067" containerName="registry-server" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.145369 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.148259 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.149269 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.154418 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6"] Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.155988 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.157775 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.158304 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.159118 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.164817 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568120-wt6rh"] Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.176684 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6"] Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.225645 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.225703 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.225740 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 
21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.226537 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.226598 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" gracePeriod=600 Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.318472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed29c77f-9b12-4b36-acf2-35a7d949449d-secret-volume\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.318547 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5wz\" (UniqueName: \"kubernetes.io/projected/3ba1a616-59a2-40cd-98e3-b051cdf06182-kube-api-access-fc5wz\") pod \"auto-csr-approver-29568120-wt6rh\" (UID: \"3ba1a616-59a2-40cd-98e3-b051cdf06182\") " pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.319023 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ffm\" (UniqueName: \"kubernetes.io/projected/ed29c77f-9b12-4b36-acf2-35a7d949449d-kube-api-access-s9ffm\") pod 
\"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.319091 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed29c77f-9b12-4b36-acf2-35a7d949449d-config-volume\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: E0321 10:00:00.356731 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.421383 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ffm\" (UniqueName: \"kubernetes.io/projected/ed29c77f-9b12-4b36-acf2-35a7d949449d-kube-api-access-s9ffm\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.421760 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed29c77f-9b12-4b36-acf2-35a7d949449d-config-volume\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 
10:00:00.421864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed29c77f-9b12-4b36-acf2-35a7d949449d-secret-volume\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.421933 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5wz\" (UniqueName: \"kubernetes.io/projected/3ba1a616-59a2-40cd-98e3-b051cdf06182-kube-api-access-fc5wz\") pod \"auto-csr-approver-29568120-wt6rh\" (UID: \"3ba1a616-59a2-40cd-98e3-b051cdf06182\") " pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.423100 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed29c77f-9b12-4b36-acf2-35a7d949449d-config-volume\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.431297 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed29c77f-9b12-4b36-acf2-35a7d949449d-secret-volume\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.437183 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5wz\" (UniqueName: \"kubernetes.io/projected/3ba1a616-59a2-40cd-98e3-b051cdf06182-kube-api-access-fc5wz\") pod \"auto-csr-approver-29568120-wt6rh\" (UID: \"3ba1a616-59a2-40cd-98e3-b051cdf06182\") " 
pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.437868 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ffm\" (UniqueName: \"kubernetes.io/projected/ed29c77f-9b12-4b36-acf2-35a7d949449d-kube-api-access-s9ffm\") pod \"collect-profiles-29568120-p94f6\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.474854 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.487421 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.931383 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568120-wt6rh"] Mar 21 10:00:00 crc kubenswrapper[4932]: W0321 10:00:00.943868 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ba1a616_59a2_40cd_98e3_b051cdf06182.slice/crio-1911bf5c4829c333dae47b633dda6392c954899e6acd7674e8d5cdeb93659c04 WatchSource:0}: Error finding container 1911bf5c4829c333dae47b633dda6392c954899e6acd7674e8d5cdeb93659c04: Status 404 returned error can't find the container with id 1911bf5c4829c333dae47b633dda6392c954899e6acd7674e8d5cdeb93659c04 Mar 21 10:00:00 crc kubenswrapper[4932]: I0321 10:00:00.946040 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6"] Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.289305 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" 
containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" exitCode=0 Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.289414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"} Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.289844 4932 scope.go:117] "RemoveContainer" containerID="fc77fa1ae642d9a4d6ebcb624085a9e1b7e1b750395461e09bf63b406f7f53d9" Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.290901 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:00:01 crc kubenswrapper[4932]: E0321 10:00:01.291436 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.291450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" event={"ID":"3ba1a616-59a2-40cd-98e3-b051cdf06182","Type":"ContainerStarted","Data":"1911bf5c4829c333dae47b633dda6392c954899e6acd7674e8d5cdeb93659c04"} Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.294705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" event={"ID":"ed29c77f-9b12-4b36-acf2-35a7d949449d","Type":"ContainerStarted","Data":"cf80544454cffbcb1da72a859ac1227de8e409cb457032ac09daecd42ceb41de"} Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 
10:00:01.294755 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" event={"ID":"ed29c77f-9b12-4b36-acf2-35a7d949449d","Type":"ContainerStarted","Data":"0b0ed09657d7dc1701e69f598073a4f41b9ee968ef465074926d0c5e5a8bfea0"} Mar 21 10:00:01 crc kubenswrapper[4932]: I0321 10:00:01.329405 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" podStartSLOduration=1.3293851509999999 podStartE2EDuration="1.329385151s" podCreationTimestamp="2026-03-21 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 10:00:01.322090473 +0000 UTC m=+3704.917288752" watchObservedRunningTime="2026-03-21 10:00:01.329385151 +0000 UTC m=+3704.924583420" Mar 21 10:00:02 crc kubenswrapper[4932]: I0321 10:00:02.305564 4932 generic.go:334] "Generic (PLEG): container finished" podID="ed29c77f-9b12-4b36-acf2-35a7d949449d" containerID="cf80544454cffbcb1da72a859ac1227de8e409cb457032ac09daecd42ceb41de" exitCode=0 Mar 21 10:00:02 crc kubenswrapper[4932]: I0321 10:00:02.305620 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" event={"ID":"ed29c77f-9b12-4b36-acf2-35a7d949449d","Type":"ContainerDied","Data":"cf80544454cffbcb1da72a859ac1227de8e409cb457032ac09daecd42ceb41de"} Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.660681 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.792826 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed29c77f-9b12-4b36-acf2-35a7d949449d-config-volume\") pod \"ed29c77f-9b12-4b36-acf2-35a7d949449d\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.792969 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed29c77f-9b12-4b36-acf2-35a7d949449d-secret-volume\") pod \"ed29c77f-9b12-4b36-acf2-35a7d949449d\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.793104 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9ffm\" (UniqueName: \"kubernetes.io/projected/ed29c77f-9b12-4b36-acf2-35a7d949449d-kube-api-access-s9ffm\") pod \"ed29c77f-9b12-4b36-acf2-35a7d949449d\" (UID: \"ed29c77f-9b12-4b36-acf2-35a7d949449d\") " Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.793703 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed29c77f-9b12-4b36-acf2-35a7d949449d-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed29c77f-9b12-4b36-acf2-35a7d949449d" (UID: "ed29c77f-9b12-4b36-acf2-35a7d949449d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.799643 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed29c77f-9b12-4b36-acf2-35a7d949449d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed29c77f-9b12-4b36-acf2-35a7d949449d" (UID: "ed29c77f-9b12-4b36-acf2-35a7d949449d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.802592 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed29c77f-9b12-4b36-acf2-35a7d949449d-kube-api-access-s9ffm" (OuterVolumeSpecName: "kube-api-access-s9ffm") pod "ed29c77f-9b12-4b36-acf2-35a7d949449d" (UID: "ed29c77f-9b12-4b36-acf2-35a7d949449d"). InnerVolumeSpecName "kube-api-access-s9ffm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.895677 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed29c77f-9b12-4b36-acf2-35a7d949449d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.895714 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed29c77f-9b12-4b36-acf2-35a7d949449d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 10:00:03 crc kubenswrapper[4932]: I0321 10:00:03.895723 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9ffm\" (UniqueName: \"kubernetes.io/projected/ed29c77f-9b12-4b36-acf2-35a7d949449d-kube-api-access-s9ffm\") on node \"crc\" DevicePath \"\"" Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.326609 4932 generic.go:334] "Generic (PLEG): container finished" podID="3ba1a616-59a2-40cd-98e3-b051cdf06182" containerID="156d9f5165dc901b0275d4ee5ef332436703f81215c05a1e25d019a6c177db03" exitCode=0 Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.326803 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" event={"ID":"3ba1a616-59a2-40cd-98e3-b051cdf06182","Type":"ContainerDied","Data":"156d9f5165dc901b0275d4ee5ef332436703f81215c05a1e25d019a6c177db03"} Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.328887 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" event={"ID":"ed29c77f-9b12-4b36-acf2-35a7d949449d","Type":"ContainerDied","Data":"0b0ed09657d7dc1701e69f598073a4f41b9ee968ef465074926d0c5e5a8bfea0"} Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.329018 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b0ed09657d7dc1701e69f598073a4f41b9ee968ef465074926d0c5e5a8bfea0" Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.328952 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568120-p94f6" Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.730764 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh"] Mar 21 10:00:04 crc kubenswrapper[4932]: I0321 10:00:04.738208 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568075-2kfkh"] Mar 21 10:00:05 crc kubenswrapper[4932]: I0321 10:00:05.679866 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:05 crc kubenswrapper[4932]: I0321 10:00:05.716215 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae52ce4a-9d6e-4032-ad79-67343a8cd2db" path="/var/lib/kubelet/pods/ae52ce4a-9d6e-4032-ad79-67343a8cd2db/volumes" Mar 21 10:00:05 crc kubenswrapper[4932]: I0321 10:00:05.837030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5wz\" (UniqueName: \"kubernetes.io/projected/3ba1a616-59a2-40cd-98e3-b051cdf06182-kube-api-access-fc5wz\") pod \"3ba1a616-59a2-40cd-98e3-b051cdf06182\" (UID: \"3ba1a616-59a2-40cd-98e3-b051cdf06182\") " Mar 21 10:00:05 crc kubenswrapper[4932]: I0321 10:00:05.843703 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba1a616-59a2-40cd-98e3-b051cdf06182-kube-api-access-fc5wz" (OuterVolumeSpecName: "kube-api-access-fc5wz") pod "3ba1a616-59a2-40cd-98e3-b051cdf06182" (UID: "3ba1a616-59a2-40cd-98e3-b051cdf06182"). InnerVolumeSpecName "kube-api-access-fc5wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:00:05 crc kubenswrapper[4932]: I0321 10:00:05.939412 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5wz\" (UniqueName: \"kubernetes.io/projected/3ba1a616-59a2-40cd-98e3-b051cdf06182-kube-api-access-fc5wz\") on node \"crc\" DevicePath \"\"" Mar 21 10:00:06 crc kubenswrapper[4932]: I0321 10:00:06.344446 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" event={"ID":"3ba1a616-59a2-40cd-98e3-b051cdf06182","Type":"ContainerDied","Data":"1911bf5c4829c333dae47b633dda6392c954899e6acd7674e8d5cdeb93659c04"} Mar 21 10:00:06 crc kubenswrapper[4932]: I0321 10:00:06.344489 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1911bf5c4829c333dae47b633dda6392c954899e6acd7674e8d5cdeb93659c04" Mar 21 10:00:06 crc kubenswrapper[4932]: I0321 10:00:06.344508 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568120-wt6rh" Mar 21 10:00:06 crc kubenswrapper[4932]: I0321 10:00:06.735396 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568114-s6zcx"] Mar 21 10:00:06 crc kubenswrapper[4932]: I0321 10:00:06.743216 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568114-s6zcx"] Mar 21 10:00:07 crc kubenswrapper[4932]: I0321 10:00:07.718660 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52e8f5b2-8558-4c32-b5ab-b329e722b09b" path="/var/lib/kubelet/pods/52e8f5b2-8558-4c32-b5ab-b329e722b09b/volumes" Mar 21 10:00:10 crc kubenswrapper[4932]: I0321 10:00:10.703571 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 10:00:10 crc kubenswrapper[4932]: I0321 10:00:10.706071 4932 scope.go:117] "RemoveContainer" 
containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:00:10 crc kubenswrapper[4932]: E0321 10:00:10.706291 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:00:10 crc kubenswrapper[4932]: E0321 10:00:10.706411 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:00:15 crc kubenswrapper[4932]: I0321 10:00:15.703011 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:00:15 crc kubenswrapper[4932]: E0321 10:00:15.703843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:00:22 crc kubenswrapper[4932]: I0321 10:00:22.703129 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:00:22 crc kubenswrapper[4932]: E0321 10:00:22.703869 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:00:24 crc kubenswrapper[4932]: I0321 10:00:24.703148 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:00:24 crc kubenswrapper[4932]: E0321 10:00:24.703785 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:00:26 crc kubenswrapper[4932]: I0321 10:00:26.702707 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:00:26 crc kubenswrapper[4932]: E0321 10:00:26.703667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:00:33 crc kubenswrapper[4932]: I0321 10:00:33.105440 4932 scope.go:117] "RemoveContainer" containerID="1f3662e646d60a76b9473fd7f65376f3faba5939981c389aac057bd55466effd"
Mar 21 10:00:33 crc kubenswrapper[4932]: I0321 10:00:33.157454 4932 scope.go:117] "RemoveContainer" containerID="8cd598d2692fb3fe5cde4d23e428e67a79bb9376ad4bd8b567c339be8230878b"
Mar 21 10:00:34 crc kubenswrapper[4932]: I0321 10:00:34.702765 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:00:34 crc kubenswrapper[4932]: E0321 10:00:34.703371 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:00:36 crc kubenswrapper[4932]: I0321 10:00:36.703692 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:00:36 crc kubenswrapper[4932]: E0321 10:00:36.704575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:00:37 crc kubenswrapper[4932]: I0321 10:00:37.709273 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:00:37 crc kubenswrapper[4932]: E0321 10:00:37.709866 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:00:47 crc kubenswrapper[4932]: I0321 10:00:47.712011 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:00:47 crc kubenswrapper[4932]: E0321 10:00:47.712760 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:00:48 crc kubenswrapper[4932]: I0321 10:00:48.702450 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:00:48 crc kubenswrapper[4932]: E0321 10:00:48.702800 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:00:50 crc kubenswrapper[4932]: I0321 10:00:50.703511 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:00:50 crc kubenswrapper[4932]: E0321 10:00:50.704034 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.149918 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29568121-t7kb5"]
Mar 21 10:01:00 crc kubenswrapper[4932]: E0321 10:01:00.152264 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba1a616-59a2-40cd-98e3-b051cdf06182" containerName="oc"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.152612 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba1a616-59a2-40cd-98e3-b051cdf06182" containerName="oc"
Mar 21 10:01:00 crc kubenswrapper[4932]: E0321 10:01:00.152695 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed29c77f-9b12-4b36-acf2-35a7d949449d" containerName="collect-profiles"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.152775 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed29c77f-9b12-4b36-acf2-35a7d949449d" containerName="collect-profiles"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.153091 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed29c77f-9b12-4b36-acf2-35a7d949449d" containerName="collect-profiles"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.153206 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba1a616-59a2-40cd-98e3-b051cdf06182" containerName="oc"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.154421 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.161391 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29568121-t7kb5"]
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.343433 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-fernet-keys\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.343503 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-config-data\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.344402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp86l\" (UniqueName: \"kubernetes.io/projected/5f6ea447-7a62-4c99-b3a6-3afd311e976b-kube-api-access-rp86l\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.344472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-combined-ca-bundle\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.446523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-fernet-keys\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.446574 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-config-data\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.446621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp86l\" (UniqueName: \"kubernetes.io/projected/5f6ea447-7a62-4c99-b3a6-3afd311e976b-kube-api-access-rp86l\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.446654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-combined-ca-bundle\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.452193 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-combined-ca-bundle\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.452338 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-config-data\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.458721 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-fernet-keys\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.463171 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp86l\" (UniqueName: \"kubernetes.io/projected/5f6ea447-7a62-4c99-b3a6-3afd311e976b-kube-api-access-rp86l\") pod \"keystone-cron-29568121-t7kb5\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") " pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.485582 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.704656 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:01:00 crc kubenswrapper[4932]: E0321 10:01:00.705158 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:01:00 crc kubenswrapper[4932]: I0321 10:01:00.897284 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29568121-t7kb5"]
Mar 21 10:01:01 crc kubenswrapper[4932]: I0321 10:01:01.192128 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29568121-t7kb5" event={"ID":"5f6ea447-7a62-4c99-b3a6-3afd311e976b","Type":"ContainerStarted","Data":"dc4638ca16d40311da524574c3d89943e97f4cc0181c27670feed9c8fcff1422"}
Mar 21 10:01:01 crc kubenswrapper[4932]: I0321 10:01:01.192500 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29568121-t7kb5" event={"ID":"5f6ea447-7a62-4c99-b3a6-3afd311e976b","Type":"ContainerStarted","Data":"becaf7d50ba027218aa94ba6d78a64860e07d7633126bab31975e267055cfaff"}
Mar 21 10:01:01 crc kubenswrapper[4932]: I0321 10:01:01.210823 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29568121-t7kb5" podStartSLOduration=1.210803203 podStartE2EDuration="1.210803203s" podCreationTimestamp="2026-03-21 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 10:01:01.205040984 +0000 UTC m=+3764.800239253" watchObservedRunningTime="2026-03-21 10:01:01.210803203 +0000 UTC m=+3764.806001472"
Mar 21 10:01:02 crc kubenswrapper[4932]: I0321 10:01:02.702392 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:01:02 crc kubenswrapper[4932]: E0321 10:01:02.703044 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:01:02 crc kubenswrapper[4932]: I0321 10:01:02.703777 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:01:02 crc kubenswrapper[4932]: E0321 10:01:02.704015 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:01:04 crc kubenswrapper[4932]: I0321 10:01:04.221788 4932 generic.go:334] "Generic (PLEG): container finished" podID="5f6ea447-7a62-4c99-b3a6-3afd311e976b" containerID="dc4638ca16d40311da524574c3d89943e97f4cc0181c27670feed9c8fcff1422" exitCode=0
Mar 21 10:01:04 crc kubenswrapper[4932]: I0321 10:01:04.221862 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29568121-t7kb5" event={"ID":"5f6ea447-7a62-4c99-b3a6-3afd311e976b","Type":"ContainerDied","Data":"dc4638ca16d40311da524574c3d89943e97f4cc0181c27670feed9c8fcff1422"}
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.573532 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.764265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-config-data\") pod \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") "
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.764315 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp86l\" (UniqueName: \"kubernetes.io/projected/5f6ea447-7a62-4c99-b3a6-3afd311e976b-kube-api-access-rp86l\") pod \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") "
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.764395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-fernet-keys\") pod \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") "
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.764568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-combined-ca-bundle\") pod \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\" (UID: \"5f6ea447-7a62-4c99-b3a6-3afd311e976b\") "
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.779363 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6ea447-7a62-4c99-b3a6-3afd311e976b-kube-api-access-rp86l" (OuterVolumeSpecName: "kube-api-access-rp86l") pod "5f6ea447-7a62-4c99-b3a6-3afd311e976b" (UID: "5f6ea447-7a62-4c99-b3a6-3afd311e976b"). InnerVolumeSpecName "kube-api-access-rp86l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.779327 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5f6ea447-7a62-4c99-b3a6-3afd311e976b" (UID: "5f6ea447-7a62-4c99-b3a6-3afd311e976b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.801242 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f6ea447-7a62-4c99-b3a6-3afd311e976b" (UID: "5f6ea447-7a62-4c99-b3a6-3afd311e976b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.827613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-config-data" (OuterVolumeSpecName: "config-data") pod "5f6ea447-7a62-4c99-b3a6-3afd311e976b" (UID: "5f6ea447-7a62-4c99-b3a6-3afd311e976b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.870392 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-config-data\") on node \"crc\" DevicePath \"\""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.870448 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp86l\" (UniqueName: \"kubernetes.io/projected/5f6ea447-7a62-4c99-b3a6-3afd311e976b-kube-api-access-rp86l\") on node \"crc\" DevicePath \"\""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.870471 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 21 10:01:05 crc kubenswrapper[4932]: I0321 10:01:05.870490 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f6ea447-7a62-4c99-b3a6-3afd311e976b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 21 10:01:06 crc kubenswrapper[4932]: I0321 10:01:06.239270 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29568121-t7kb5" event={"ID":"5f6ea447-7a62-4c99-b3a6-3afd311e976b","Type":"ContainerDied","Data":"becaf7d50ba027218aa94ba6d78a64860e07d7633126bab31975e267055cfaff"}
Mar 21 10:01:06 crc kubenswrapper[4932]: I0321 10:01:06.239307 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="becaf7d50ba027218aa94ba6d78a64860e07d7633126bab31975e267055cfaff"
Mar 21 10:01:06 crc kubenswrapper[4932]: I0321 10:01:06.239383 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29568121-t7kb5"
Mar 21 10:01:14 crc kubenswrapper[4932]: I0321 10:01:14.702293 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:01:14 crc kubenswrapper[4932]: I0321 10:01:14.703035 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:01:14 crc kubenswrapper[4932]: E0321 10:01:14.703090 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:01:14 crc kubenswrapper[4932]: I0321 10:01:14.703275 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:01:14 crc kubenswrapper[4932]: E0321 10:01:14.703282 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:01:14 crc kubenswrapper[4932]: E0321 10:01:14.703522 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:01:25 crc kubenswrapper[4932]: I0321 10:01:25.702208 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:01:25 crc kubenswrapper[4932]: E0321 10:01:25.702987 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:01:27 crc kubenswrapper[4932]: I0321 10:01:27.709375 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:01:27 crc kubenswrapper[4932]: E0321 10:01:27.710867 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:01:28 crc kubenswrapper[4932]: I0321 10:01:28.703442 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:01:28 crc kubenswrapper[4932]: E0321 10:01:28.703825 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:01:38 crc kubenswrapper[4932]: I0321 10:01:38.702809 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:01:38 crc kubenswrapper[4932]: I0321 10:01:38.703477 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:01:38 crc kubenswrapper[4932]: E0321 10:01:38.703620 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:01:38 crc kubenswrapper[4932]: E0321 10:01:38.703763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:01:39 crc kubenswrapper[4932]: I0321 10:01:39.702878 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:01:39 crc kubenswrapper[4932]: E0321 10:01:39.703118 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:01:52 crc kubenswrapper[4932]: I0321 10:01:52.702810 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:01:52 crc kubenswrapper[4932]: E0321 10:01:52.703596 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:01:53 crc kubenswrapper[4932]: I0321 10:01:53.703332 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:01:53 crc kubenswrapper[4932]: E0321 10:01:53.703632 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:01:53 crc kubenswrapper[4932]: I0321 10:01:53.703654 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:01:53 crc kubenswrapper[4932]: E0321 10:01:53.703973 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.145114 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568122-vs97z"]
Mar 21 10:02:00 crc kubenswrapper[4932]: E0321 10:02:00.146078 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6ea447-7a62-4c99-b3a6-3afd311e976b" containerName="keystone-cron"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.146090 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6ea447-7a62-4c99-b3a6-3afd311e976b" containerName="keystone-cron"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.146286 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6ea447-7a62-4c99-b3a6-3afd311e976b" containerName="keystone-cron"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.147071 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.149242 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.149662 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.150464 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.154169 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568122-vs97z"]
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.251806 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz7tx\" (UniqueName: \"kubernetes.io/projected/24aa54d7-3784-4be4-a6a2-414ce29ae040-kube-api-access-rz7tx\") pod \"auto-csr-approver-29568122-vs97z\" (UID: \"24aa54d7-3784-4be4-a6a2-414ce29ae040\") " pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.353867 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz7tx\" (UniqueName: \"kubernetes.io/projected/24aa54d7-3784-4be4-a6a2-414ce29ae040-kube-api-access-rz7tx\") pod \"auto-csr-approver-29568122-vs97z\" (UID: \"24aa54d7-3784-4be4-a6a2-414ce29ae040\") " pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.376126 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz7tx\" (UniqueName: \"kubernetes.io/projected/24aa54d7-3784-4be4-a6a2-414ce29ae040-kube-api-access-rz7tx\") pod \"auto-csr-approver-29568122-vs97z\" (UID: \"24aa54d7-3784-4be4-a6a2-414ce29ae040\") " pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.476162 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:00 crc kubenswrapper[4932]: I0321 10:02:00.925445 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568122-vs97z"]
Mar 21 10:02:01 crc kubenswrapper[4932]: I0321 10:02:01.059797 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568122-vs97z" event={"ID":"24aa54d7-3784-4be4-a6a2-414ce29ae040","Type":"ContainerStarted","Data":"7da8b3ae7fb2bb1b3534b3c0b56e1606cdccc5f0f29e3d46783fd50195973264"}
Mar 21 10:02:02 crc kubenswrapper[4932]: I0321 10:02:02.068553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568122-vs97z" event={"ID":"24aa54d7-3784-4be4-a6a2-414ce29ae040","Type":"ContainerStarted","Data":"247426f2579aa4be0ef11e1f93dd9b7210ef4b4f94138929e2f11d9797f39259"}
Mar 21 10:02:02 crc kubenswrapper[4932]: I0321 10:02:02.101696 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568122-vs97z" podStartSLOduration=1.328509994 podStartE2EDuration="2.101668982s" podCreationTimestamp="2026-03-21 10:02:00 +0000 UTC" firstStartedPulling="2026-03-21 10:02:00.930077365 +0000 UTC m=+3824.525275634" lastFinishedPulling="2026-03-21 10:02:01.703236343 +0000 UTC m=+3825.298434622" observedRunningTime="2026-03-21 10:02:02.092086613 +0000 UTC m=+3825.687284882" watchObservedRunningTime="2026-03-21 10:02:02.101668982 +0000 UTC m=+3825.696867291"
Mar 21 10:02:03 crc kubenswrapper[4932]: I0321 10:02:03.077610 4932 generic.go:334] "Generic (PLEG): container finished" podID="24aa54d7-3784-4be4-a6a2-414ce29ae040" containerID="247426f2579aa4be0ef11e1f93dd9b7210ef4b4f94138929e2f11d9797f39259" exitCode=0
Mar 21 10:02:03 crc kubenswrapper[4932]: I0321 10:02:03.077906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568122-vs97z" event={"ID":"24aa54d7-3784-4be4-a6a2-414ce29ae040","Type":"ContainerDied","Data":"247426f2579aa4be0ef11e1f93dd9b7210ef4b4f94138929e2f11d9797f39259"}
Mar 21 10:02:04 crc kubenswrapper[4932]: I0321 10:02:04.413764 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:04 crc kubenswrapper[4932]: I0321 10:02:04.542951 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz7tx\" (UniqueName: \"kubernetes.io/projected/24aa54d7-3784-4be4-a6a2-414ce29ae040-kube-api-access-rz7tx\") pod \"24aa54d7-3784-4be4-a6a2-414ce29ae040\" (UID: \"24aa54d7-3784-4be4-a6a2-414ce29ae040\") "
Mar 21 10:02:04 crc kubenswrapper[4932]: I0321 10:02:04.548871 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24aa54d7-3784-4be4-a6a2-414ce29ae040-kube-api-access-rz7tx" (OuterVolumeSpecName: "kube-api-access-rz7tx") pod "24aa54d7-3784-4be4-a6a2-414ce29ae040" (UID: "24aa54d7-3784-4be4-a6a2-414ce29ae040"). InnerVolumeSpecName "kube-api-access-rz7tx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:02:04 crc kubenswrapper[4932]: I0321 10:02:04.645373 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz7tx\" (UniqueName: \"kubernetes.io/projected/24aa54d7-3784-4be4-a6a2-414ce29ae040-kube-api-access-rz7tx\") on node \"crc\" DevicePath \"\""
Mar 21 10:02:05 crc kubenswrapper[4932]: I0321 10:02:05.098022 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568122-vs97z" event={"ID":"24aa54d7-3784-4be4-a6a2-414ce29ae040","Type":"ContainerDied","Data":"7da8b3ae7fb2bb1b3534b3c0b56e1606cdccc5f0f29e3d46783fd50195973264"}
Mar 21 10:02:05 crc kubenswrapper[4932]: I0321 10:02:05.098069 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da8b3ae7fb2bb1b3534b3c0b56e1606cdccc5f0f29e3d46783fd50195973264"
Mar 21 10:02:05 crc kubenswrapper[4932]: I0321 10:02:05.098136 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568122-vs97z"
Mar 21 10:02:05 crc kubenswrapper[4932]: I0321 10:02:05.178805 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568116-95rq5"]
Mar 21 10:02:05 crc kubenswrapper[4932]: I0321 10:02:05.187056 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568116-95rq5"]
Mar 21 10:02:05 crc kubenswrapper[4932]: I0321 10:02:05.715288 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33b86a9-169e-43e0-9d21-ecc4aaf17f8b" path="/var/lib/kubelet/pods/f33b86a9-169e-43e0-9d21-ecc4aaf17f8b/volumes"
Mar 21 10:02:06 crc kubenswrapper[4932]: I0321 10:02:06.702278 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6"
Mar 21 10:02:06 crc kubenswrapper[4932]: I0321 10:02:06.702666 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:02:06 crc kubenswrapper[4932]: E0321 10:02:06.702854 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:02:06 crc kubenswrapper[4932]: E0321 10:02:06.702866 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:02:07 crc kubenswrapper[4932]: I0321 10:02:07.710017 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632"
Mar 21 10:02:07 crc kubenswrapper[4932]: E0321 10:02:07.710305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:02:17 crc kubenswrapper[4932]: I0321 10:02:17.711679 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:02:17 crc kubenswrapper[4932]: E0321 10:02:17.712361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:02:18 crc kubenswrapper[4932]: I0321 10:02:18.702387 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 10:02:18 crc kubenswrapper[4932]: E0321 10:02:18.702890 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:02:20 crc kubenswrapper[4932]: I0321 10:02:20.703289 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 10:02:20 crc kubenswrapper[4932]: E0321 10:02:20.703863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:02:31 crc kubenswrapper[4932]: I0321 10:02:31.702648 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 10:02:31 crc kubenswrapper[4932]: I0321 10:02:31.702944 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:02:31 crc kubenswrapper[4932]: E0321 10:02:31.703134 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:02:31 crc kubenswrapper[4932]: E0321 10:02:31.703183 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:02:32 crc kubenswrapper[4932]: I0321 10:02:32.702497 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 10:02:32 crc kubenswrapper[4932]: E0321 10:02:32.702837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:02:33 crc kubenswrapper[4932]: I0321 10:02:33.257561 4932 scope.go:117] "RemoveContainer" containerID="fd04c6cd184e1053c36251806152f55dc58699a11aad8745a7eadcb2ff33cfcf" Mar 21 10:02:43 crc kubenswrapper[4932]: I0321 10:02:43.702841 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 10:02:43 crc kubenswrapper[4932]: E0321 10:02:43.704840 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" 
pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:02:45 crc kubenswrapper[4932]: I0321 10:02:45.704435 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 10:02:46 crc kubenswrapper[4932]: I0321 10:02:46.464543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"} Mar 21 10:02:46 crc kubenswrapper[4932]: I0321 10:02:46.702270 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:02:46 crc kubenswrapper[4932]: E0321 10:02:46.702501 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:02:47 crc kubenswrapper[4932]: I0321 10:02:47.741282 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:02:47 crc kubenswrapper[4932]: I0321 10:02:47.741648 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:02:54 crc kubenswrapper[4932]: I0321 10:02:54.557192 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" exitCode=1 Mar 21 10:02:54 crc kubenswrapper[4932]: I0321 10:02:54.557297 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"} Mar 21 10:02:54 crc kubenswrapper[4932]: I0321 10:02:54.557835 4932 scope.go:117] "RemoveContainer" containerID="85d7d811e663424aced1361a60b00b1e01655cb03360ffdae5dec22c1036d0b6" Mar 21 10:02:54 crc kubenswrapper[4932]: I0321 10:02:54.559156 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:02:54 crc kubenswrapper[4932]: E0321 10:02:54.559780 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:02:54 crc kubenswrapper[4932]: I0321 10:02:54.702409 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 10:02:55 crc kubenswrapper[4932]: I0321 10:02:55.570585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"} Mar 21 10:02:57 crc kubenswrapper[4932]: I0321 10:02:57.740580 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:02:57 crc kubenswrapper[4932]: I0321 10:02:57.741336 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:02:57 crc kubenswrapper[4932]: I0321 10:02:57.742804 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:02:57 
crc kubenswrapper[4932]: E0321 10:02:57.743219 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:02:57 crc kubenswrapper[4932]: I0321 10:02:57.947986 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:02:57 crc kubenswrapper[4932]: I0321 10:02:57.949579 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:03:00 crc kubenswrapper[4932]: I0321 10:03:00.702505 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:03:00 crc kubenswrapper[4932]: E0321 10:03:00.703004 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:03:03 crc kubenswrapper[4932]: I0321 10:03:03.666564 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" exitCode=1 Mar 21 10:03:03 crc kubenswrapper[4932]: I0321 10:03:03.666637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"} Mar 21 10:03:03 
crc kubenswrapper[4932]: I0321 10:03:03.667034 4932 scope.go:117] "RemoveContainer" containerID="a6ee3b1aebacbd91a327bc793eadda77d9613b5a401d96f2e36a7cc6a144f632" Mar 21 10:03:03 crc kubenswrapper[4932]: I0321 10:03:03.667802 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:03:03 crc kubenswrapper[4932]: E0321 10:03:03.668068 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:03:07 crc kubenswrapper[4932]: I0321 10:03:07.947783 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:03:07 crc kubenswrapper[4932]: I0321 10:03:07.948968 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:03:07 crc kubenswrapper[4932]: I0321 10:03:07.949768 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:03:07 crc kubenswrapper[4932]: E0321 10:03:07.949981 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:03:09 crc kubenswrapper[4932]: I0321 10:03:09.703580 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:03:09 crc kubenswrapper[4932]: E0321 10:03:09.704772 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:03:12 crc kubenswrapper[4932]: I0321 10:03:12.702784 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:03:12 crc kubenswrapper[4932]: E0321 10:03:12.703486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:03:21 crc kubenswrapper[4932]: I0321 10:03:21.703394 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:03:21 crc kubenswrapper[4932]: I0321 10:03:21.704042 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:03:21 crc kubenswrapper[4932]: E0321 10:03:21.704264 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:03:21 crc kubenswrapper[4932]: E0321 10:03:21.704378 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:03:26 crc kubenswrapper[4932]: I0321 10:03:26.704004 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:03:26 crc kubenswrapper[4932]: E0321 10:03:26.705911 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:03:35 crc kubenswrapper[4932]: I0321 10:03:35.702946 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:03:35 crc kubenswrapper[4932]: I0321 10:03:35.703437 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:03:35 crc kubenswrapper[4932]: E0321 10:03:35.703610 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:03:35 crc kubenswrapper[4932]: E0321 10:03:35.703723 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:03:41 crc kubenswrapper[4932]: I0321 10:03:41.703235 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:03:41 crc kubenswrapper[4932]: E0321 10:03:41.704555 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:03:46 crc kubenswrapper[4932]: I0321 10:03:46.703096 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:03:46 crc kubenswrapper[4932]: E0321 10:03:46.703596 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:03:49 crc kubenswrapper[4932]: I0321 10:03:49.702750 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:03:49 crc kubenswrapper[4932]: E0321 10:03:49.703241 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:03:56 crc kubenswrapper[4932]: I0321 10:03:56.702588 4932 scope.go:117] "RemoveContainer" 
containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:03:56 crc kubenswrapper[4932]: E0321 10:03:56.703110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.144246 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568124-2zhws"] Mar 21 10:04:00 crc kubenswrapper[4932]: E0321 10:04:00.145190 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24aa54d7-3784-4be4-a6a2-414ce29ae040" containerName="oc" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.145202 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="24aa54d7-3784-4be4-a6a2-414ce29ae040" containerName="oc" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.145413 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="24aa54d7-3784-4be4-a6a2-414ce29ae040" containerName="oc" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.146081 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568124-2zhws" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.148603 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.148690 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.148826 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.151654 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568124-2zhws"] Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.233617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbx2d\" (UniqueName: \"kubernetes.io/projected/1bb1008f-a18b-4e72-a85f-b20f3a9519fc-kube-api-access-vbx2d\") pod \"auto-csr-approver-29568124-2zhws\" (UID: \"1bb1008f-a18b-4e72-a85f-b20f3a9519fc\") " pod="openshift-infra/auto-csr-approver-29568124-2zhws" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.335204 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbx2d\" (UniqueName: \"kubernetes.io/projected/1bb1008f-a18b-4e72-a85f-b20f3a9519fc-kube-api-access-vbx2d\") pod \"auto-csr-approver-29568124-2zhws\" (UID: \"1bb1008f-a18b-4e72-a85f-b20f3a9519fc\") " pod="openshift-infra/auto-csr-approver-29568124-2zhws" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.377042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbx2d\" (UniqueName: \"kubernetes.io/projected/1bb1008f-a18b-4e72-a85f-b20f3a9519fc-kube-api-access-vbx2d\") pod \"auto-csr-approver-29568124-2zhws\" (UID: \"1bb1008f-a18b-4e72-a85f-b20f3a9519fc\") " 
pod="openshift-infra/auto-csr-approver-29568124-2zhws" Mar 21 10:04:00 crc kubenswrapper[4932]: I0321 10:04:00.467720 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568124-2zhws" Mar 21 10:04:01 crc kubenswrapper[4932]: I0321 10:04:00.702522 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:04:01 crc kubenswrapper[4932]: E0321 10:04:00.703625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:04:01 crc kubenswrapper[4932]: I0321 10:04:00.916282 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568124-2zhws"] Mar 21 10:04:01 crc kubenswrapper[4932]: W0321 10:04:00.923330 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bb1008f_a18b_4e72_a85f_b20f3a9519fc.slice/crio-a24be842244cd8ad771013b798cb1289e8d8b39369ea42d83c90e8ebfe3a8341 WatchSource:0}: Error finding container a24be842244cd8ad771013b798cb1289e8d8b39369ea42d83c90e8ebfe3a8341: Status 404 returned error can't find the container with id a24be842244cd8ad771013b798cb1289e8d8b39369ea42d83c90e8ebfe3a8341 Mar 21 10:04:01 crc kubenswrapper[4932]: I0321 10:04:01.178610 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568124-2zhws" event={"ID":"1bb1008f-a18b-4e72-a85f-b20f3a9519fc","Type":"ContainerStarted","Data":"a24be842244cd8ad771013b798cb1289e8d8b39369ea42d83c90e8ebfe3a8341"} Mar 21 10:04:02 crc kubenswrapper[4932]: I0321 10:04:02.187819 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29568124-2zhws" event={"ID":"1bb1008f-a18b-4e72-a85f-b20f3a9519fc","Type":"ContainerStarted","Data":"c8382eb6e0ac1f211c4af9a71fc16e4b4a2a3e2a4a432aa6e1f2f5b444480152"} Mar 21 10:04:02 crc kubenswrapper[4932]: I0321 10:04:02.209189 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568124-2zhws" podStartSLOduration=1.325214908 podStartE2EDuration="2.209166417s" podCreationTimestamp="2026-03-21 10:04:00 +0000 UTC" firstStartedPulling="2026-03-21 10:04:00.927316078 +0000 UTC m=+3944.522514347" lastFinishedPulling="2026-03-21 10:04:01.811267567 +0000 UTC m=+3945.406465856" observedRunningTime="2026-03-21 10:04:02.202016014 +0000 UTC m=+3945.797214283" watchObservedRunningTime="2026-03-21 10:04:02.209166417 +0000 UTC m=+3945.804364696" Mar 21 10:04:03 crc kubenswrapper[4932]: I0321 10:04:03.196533 4932 generic.go:334] "Generic (PLEG): container finished" podID="1bb1008f-a18b-4e72-a85f-b20f3a9519fc" containerID="c8382eb6e0ac1f211c4af9a71fc16e4b4a2a3e2a4a432aa6e1f2f5b444480152" exitCode=0 Mar 21 10:04:03 crc kubenswrapper[4932]: I0321 10:04:03.196571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568124-2zhws" event={"ID":"1bb1008f-a18b-4e72-a85f-b20f3a9519fc","Type":"ContainerDied","Data":"c8382eb6e0ac1f211c4af9a71fc16e4b4a2a3e2a4a432aa6e1f2f5b444480152"} Mar 21 10:04:04 crc kubenswrapper[4932]: I0321 10:04:04.702929 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:04:04 crc kubenswrapper[4932]: E0321 10:04:04.703594 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" 
podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:04:05 crc kubenswrapper[4932]: I0321 10:04:05.213940 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568124-2zhws" event={"ID":"1bb1008f-a18b-4e72-a85f-b20f3a9519fc","Type":"ContainerDied","Data":"a24be842244cd8ad771013b798cb1289e8d8b39369ea42d83c90e8ebfe3a8341"}
Mar 21 10:04:05 crc kubenswrapper[4932]: I0321 10:04:05.214423 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24be842244cd8ad771013b798cb1289e8d8b39369ea42d83c90e8ebfe3a8341"
Mar 21 10:04:05 crc kubenswrapper[4932]: I0321 10:04:05.226392 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568124-2zhws"
Mar 21 10:04:05 crc kubenswrapper[4932]: I0321 10:04:05.336213 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbx2d\" (UniqueName: \"kubernetes.io/projected/1bb1008f-a18b-4e72-a85f-b20f3a9519fc-kube-api-access-vbx2d\") pod \"1bb1008f-a18b-4e72-a85f-b20f3a9519fc\" (UID: \"1bb1008f-a18b-4e72-a85f-b20f3a9519fc\") "
Mar 21 10:04:05 crc kubenswrapper[4932]: I0321 10:04:05.428532 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb1008f-a18b-4e72-a85f-b20f3a9519fc-kube-api-access-vbx2d" (OuterVolumeSpecName: "kube-api-access-vbx2d") pod "1bb1008f-a18b-4e72-a85f-b20f3a9519fc" (UID: "1bb1008f-a18b-4e72-a85f-b20f3a9519fc"). InnerVolumeSpecName "kube-api-access-vbx2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:04:05 crc kubenswrapper[4932]: I0321 10:04:05.439144 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbx2d\" (UniqueName: \"kubernetes.io/projected/1bb1008f-a18b-4e72-a85f-b20f3a9519fc-kube-api-access-vbx2d\") on node \"crc\" DevicePath \"\""
Mar 21 10:04:06 crc kubenswrapper[4932]: I0321 10:04:06.223085 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568124-2zhws"
Mar 21 10:04:06 crc kubenswrapper[4932]: I0321 10:04:06.302178 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568118-5t9gc"]
Mar 21 10:04:06 crc kubenswrapper[4932]: I0321 10:04:06.312688 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568118-5t9gc"]
Mar 21 10:04:07 crc kubenswrapper[4932]: I0321 10:04:07.713157 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f253ff-12ab-4142-a9ee-4069b26afc64" path="/var/lib/kubelet/pods/c5f253ff-12ab-4142-a9ee-4069b26afc64/volumes"
Mar 21 10:04:08 crc kubenswrapper[4932]: I0321 10:04:08.702617 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:04:08 crc kubenswrapper[4932]: E0321 10:04:08.703380 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:04:11 crc kubenswrapper[4932]: I0321 10:04:11.703231 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:04:11 crc kubenswrapper[4932]: E0321 10:04:11.703824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:04:17 crc kubenswrapper[4932]: I0321 10:04:17.711707 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:04:17 crc kubenswrapper[4932]: E0321 10:04:17.712631 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:04:23 crc kubenswrapper[4932]: I0321 10:04:23.705090 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:04:23 crc kubenswrapper[4932]: E0321 10:04:23.714696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:04:23 crc kubenswrapper[4932]: I0321 10:04:23.725655 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:04:23 crc kubenswrapper[4932]: E0321 10:04:23.731346 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:04:29 crc kubenswrapper[4932]: I0321 10:04:29.702882 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:04:29 crc kubenswrapper[4932]: E0321 10:04:29.703765 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:04:33 crc kubenswrapper[4932]: I0321 10:04:33.341751 4932 scope.go:117] "RemoveContainer" containerID="9ee931476ea7d2de1ecafe9cb5fc9ccb199be6f263341af2c4434d374a3240f0"
Mar 21 10:04:34 crc kubenswrapper[4932]: I0321 10:04:34.703080 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:04:34 crc kubenswrapper[4932]: E0321 10:04:34.703563 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:04:35 crc kubenswrapper[4932]: I0321 10:04:35.703503 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:04:35 crc kubenswrapper[4932]: E0321 10:04:35.704045 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:04:41 crc kubenswrapper[4932]: I0321 10:04:41.703431 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:04:41 crc kubenswrapper[4932]: E0321 10:04:41.704338 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:04:48 crc kubenswrapper[4932]: I0321 10:04:48.703525 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:04:48 crc kubenswrapper[4932]: E0321 10:04:48.704544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:04:49 crc kubenswrapper[4932]: I0321 10:04:49.703021 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:04:49 crc kubenswrapper[4932]: E0321 10:04:49.703575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:04:56 crc kubenswrapper[4932]: I0321 10:04:56.703705 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:04:56 crc kubenswrapper[4932]: E0321 10:04:56.704920 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:05:02 crc kubenswrapper[4932]: I0321 10:05:02.702973 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:05:02 crc kubenswrapper[4932]: E0321 10:05:02.703957 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:05:05 crc kubenswrapper[4932]: I0321 10:05:05.703027 4932 scope.go:117] "RemoveContainer" containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765"
Mar 21 10:05:06 crc kubenswrapper[4932]: I0321 10:05:06.775171 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"bf6b421b3a548e404a9c4ab92c16ea4a62bc206206bc934f9da7ea8a97518f3d"}
Mar 21 10:05:07 crc kubenswrapper[4932]: I0321 10:05:07.708631 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:05:07 crc kubenswrapper[4932]: E0321 10:05:07.709559 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:05:14 crc kubenswrapper[4932]: I0321 10:05:14.704265 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:05:14 crc kubenswrapper[4932]: E0321 10:05:14.706029 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:05:19 crc kubenswrapper[4932]: I0321 10:05:19.703082 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:05:19 crc kubenswrapper[4932]: E0321 10:05:19.704035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:05:25 crc kubenswrapper[4932]: I0321 10:05:25.703119 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:05:25 crc kubenswrapper[4932]: E0321 10:05:25.703863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:05:34 crc kubenswrapper[4932]: I0321 10:05:34.703093 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:05:34 crc kubenswrapper[4932]: E0321 10:05:34.703854 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:05:40 crc kubenswrapper[4932]: I0321 10:05:40.702812 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:05:40 crc kubenswrapper[4932]: E0321 10:05:40.703575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:05:47 crc kubenswrapper[4932]: I0321 10:05:47.708686 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:05:47 crc kubenswrapper[4932]: E0321 10:05:47.710548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:05:54 crc kubenswrapper[4932]: I0321 10:05:54.705068 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:05:54 crc kubenswrapper[4932]: E0321 10:05:54.706741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.149384 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568126-2f2n7"]
Mar 21 10:06:00 crc kubenswrapper[4932]: E0321 10:06:00.150635 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb1008f-a18b-4e72-a85f-b20f3a9519fc" containerName="oc"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.150659 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb1008f-a18b-4e72-a85f-b20f3a9519fc" containerName="oc"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.151063 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb1008f-a18b-4e72-a85f-b20f3a9519fc" containerName="oc"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.152195 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.154934 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.155060 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.155231 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.162866 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568126-2f2n7"]
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.319049 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxfm\" (UniqueName: \"kubernetes.io/projected/5385fb18-9c86-4807-917b-ed05fbfcec54-kube-api-access-2pxfm\") pod \"auto-csr-approver-29568126-2f2n7\" (UID: \"5385fb18-9c86-4807-917b-ed05fbfcec54\") " pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.421055 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pxfm\" (UniqueName: \"kubernetes.io/projected/5385fb18-9c86-4807-917b-ed05fbfcec54-kube-api-access-2pxfm\") pod \"auto-csr-approver-29568126-2f2n7\" (UID: \"5385fb18-9c86-4807-917b-ed05fbfcec54\") " pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.440436 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pxfm\" (UniqueName: \"kubernetes.io/projected/5385fb18-9c86-4807-917b-ed05fbfcec54-kube-api-access-2pxfm\") pod \"auto-csr-approver-29568126-2f2n7\" (UID: \"5385fb18-9c86-4807-917b-ed05fbfcec54\") " pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.485252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.926969 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568126-2f2n7"]
Mar 21 10:06:00 crc kubenswrapper[4932]: W0321 10:06:00.928432 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5385fb18_9c86_4807_917b_ed05fbfcec54.slice/crio-72136b4335c945baa1720e5a7096c7d92e97573bc761c59a46ebd4f7bb3fa665 WatchSource:0}: Error finding container 72136b4335c945baa1720e5a7096c7d92e97573bc761c59a46ebd4f7bb3fa665: Status 404 returned error can't find the container with id 72136b4335c945baa1720e5a7096c7d92e97573bc761c59a46ebd4f7bb3fa665
Mar 21 10:06:00 crc kubenswrapper[4932]: I0321 10:06:00.931637 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 21 10:06:01 crc kubenswrapper[4932]: I0321 10:06:01.244974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568126-2f2n7" event={"ID":"5385fb18-9c86-4807-917b-ed05fbfcec54","Type":"ContainerStarted","Data":"72136b4335c945baa1720e5a7096c7d92e97573bc761c59a46ebd4f7bb3fa665"}
Mar 21 10:06:02 crc kubenswrapper[4932]: I0321 10:06:02.702944 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:06:02 crc kubenswrapper[4932]: E0321 10:06:02.703705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:06:03 crc kubenswrapper[4932]: I0321 10:06:03.264339 4932 generic.go:334] "Generic (PLEG): container finished" podID="5385fb18-9c86-4807-917b-ed05fbfcec54" containerID="5cbf2418369ceeff4136302cdebe7a2a5526d432eb06703c65139985ada0a0c4" exitCode=0
Mar 21 10:06:03 crc kubenswrapper[4932]: I0321 10:06:03.264397 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568126-2f2n7" event={"ID":"5385fb18-9c86-4807-917b-ed05fbfcec54","Type":"ContainerDied","Data":"5cbf2418369ceeff4136302cdebe7a2a5526d432eb06703c65139985ada0a0c4"}
Mar 21 10:06:04 crc kubenswrapper[4932]: I0321 10:06:04.617841 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:04 crc kubenswrapper[4932]: I0321 10:06:04.702911 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pxfm\" (UniqueName: \"kubernetes.io/projected/5385fb18-9c86-4807-917b-ed05fbfcec54-kube-api-access-2pxfm\") pod \"5385fb18-9c86-4807-917b-ed05fbfcec54\" (UID: \"5385fb18-9c86-4807-917b-ed05fbfcec54\") "
Mar 21 10:06:04 crc kubenswrapper[4932]: I0321 10:06:04.708688 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5385fb18-9c86-4807-917b-ed05fbfcec54-kube-api-access-2pxfm" (OuterVolumeSpecName: "kube-api-access-2pxfm") pod "5385fb18-9c86-4807-917b-ed05fbfcec54" (UID: "5385fb18-9c86-4807-917b-ed05fbfcec54"). InnerVolumeSpecName "kube-api-access-2pxfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:06:04 crc kubenswrapper[4932]: I0321 10:06:04.806089 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pxfm\" (UniqueName: \"kubernetes.io/projected/5385fb18-9c86-4807-917b-ed05fbfcec54-kube-api-access-2pxfm\") on node \"crc\" DevicePath \"\""
Mar 21 10:06:05 crc kubenswrapper[4932]: I0321 10:06:05.284249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568126-2f2n7" event={"ID":"5385fb18-9c86-4807-917b-ed05fbfcec54","Type":"ContainerDied","Data":"72136b4335c945baa1720e5a7096c7d92e97573bc761c59a46ebd4f7bb3fa665"}
Mar 21 10:06:05 crc kubenswrapper[4932]: I0321 10:06:05.284308 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568126-2f2n7"
Mar 21 10:06:05 crc kubenswrapper[4932]: I0321 10:06:05.284398 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72136b4335c945baa1720e5a7096c7d92e97573bc761c59a46ebd4f7bb3fa665"
Mar 21 10:06:05 crc kubenswrapper[4932]: I0321 10:06:05.688972 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568120-wt6rh"]
Mar 21 10:06:05 crc kubenswrapper[4932]: I0321 10:06:05.696279 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568120-wt6rh"]
Mar 21 10:06:05 crc kubenswrapper[4932]: I0321 10:06:05.713317 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba1a616-59a2-40cd-98e3-b051cdf06182" path="/var/lib/kubelet/pods/3ba1a616-59a2-40cd-98e3-b051cdf06182/volumes"
Mar 21 10:06:09 crc kubenswrapper[4932]: I0321 10:06:09.702801 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:06:09 crc kubenswrapper[4932]: E0321 10:06:09.703519 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:06:17 crc kubenswrapper[4932]: I0321 10:06:17.001392 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-fdnwp" podUID="53b6ef69-81be-4a78-9f72-c0464ac4b003" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.82:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 21 10:06:17 crc kubenswrapper[4932]: E0321 10:06:17.053324 4932 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.104s"
Mar 21 10:06:17 crc kubenswrapper[4932]: I0321 10:06:17.054617 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:06:17 crc kubenswrapper[4932]: E0321 10:06:17.054816 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:06:20 crc kubenswrapper[4932]: I0321 10:06:20.702705 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:06:20 crc kubenswrapper[4932]: E0321 10:06:20.703294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:06:27 crc kubenswrapper[4932]: I0321 10:06:27.713678 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:06:27 crc kubenswrapper[4932]: E0321 10:06:27.714961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:06:33 crc kubenswrapper[4932]: I0321 10:06:33.434812 4932 scope.go:117] "RemoveContainer" containerID="156d9f5165dc901b0275d4ee5ef332436703f81215c05a1e25d019a6c177db03"
Mar 21 10:06:35 crc kubenswrapper[4932]: I0321 10:06:35.702476 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:06:35 crc kubenswrapper[4932]: E0321 10:06:35.703159 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:06:40 crc kubenswrapper[4932]: I0321 10:06:40.703068 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:06:40 crc kubenswrapper[4932]: E0321 10:06:40.703807 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:06:46 crc kubenswrapper[4932]: I0321 10:06:46.703503 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3"
Mar 21 10:06:46 crc kubenswrapper[4932]: E0321 10:06:46.704218 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:06:52 crc kubenswrapper[4932]: I0321 10:06:52.702574 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6"
Mar 21 10:06:52 crc kubenswrapper[4932]: E0321 10:06:52.703295 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.492743 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gv9pm"]
Mar 21 10:06:55 crc kubenswrapper[4932]: E0321 10:06:55.493624 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5385fb18-9c86-4807-917b-ed05fbfcec54" containerName="oc"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.493640 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5385fb18-9c86-4807-917b-ed05fbfcec54" containerName="oc"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.493963 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5385fb18-9c86-4807-917b-ed05fbfcec54" containerName="oc"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.495717 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.502279 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv9pm"]
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.568240 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbnk\" (UniqueName: \"kubernetes.io/projected/52131835-08f0-4ff6-9f53-a00024ab875c-kube-api-access-rpbnk\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.568333 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-utilities\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.568567 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-catalog-content\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.671018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-utilities\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.671074 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-catalog-content\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.671273 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbnk\" (UniqueName: \"kubernetes.io/projected/52131835-08f0-4ff6-9f53-a00024ab875c-kube-api-access-rpbnk\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.671555 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-utilities\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.671633 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-catalog-content\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.683381 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gwjhh"]
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.685291 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.696112 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbnk\" (UniqueName: \"kubernetes.io/projected/52131835-08f0-4ff6-9f53-a00024ab875c-kube-api-access-rpbnk\") pod \"redhat-operators-gv9pm\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.701369 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwjhh"]
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.773658 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-utilities\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.774044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-catalog-content\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.774226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgc5\" (UniqueName: \"kubernetes.io/projected/24065cea-67cd-42ac-85e6-8f83bf2894ee-kube-api-access-6mgc5\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.819381 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9pm"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.876053 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-utilities\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.876136 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-catalog-content\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.876255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mgc5\" (UniqueName: \"kubernetes.io/projected/24065cea-67cd-42ac-85e6-8f83bf2894ee-kube-api-access-6mgc5\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.876564 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-utilities\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.876831 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-catalog-content\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:55 crc kubenswrapper[4932]: I0321 10:06:55.894003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mgc5\" (UniqueName: \"kubernetes.io/projected/24065cea-67cd-42ac-85e6-8f83bf2894ee-kube-api-access-6mgc5\") pod \"redhat-marketplace-gwjhh\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:56 crc kubenswrapper[4932]: I0321 10:06:56.046416 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwjhh"
Mar 21 10:06:56 crc kubenswrapper[4932]: I0321 10:06:56.478357 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gv9pm"]
Mar 21 10:06:56 crc kubenswrapper[4932]: I0321 10:06:56.614030 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwjhh"]
Mar 21 10:06:57 crc kubenswrapper[4932]: I0321 10:06:57.395384 4932 generic.go:334] "Generic (PLEG): container finished" podID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerID="a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225" exitCode=0
Mar 21 10:06:57 crc kubenswrapper[4932]: I0321 10:06:57.395429 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwjhh" event={"ID":"24065cea-67cd-42ac-85e6-8f83bf2894ee","Type":"ContainerDied","Data":"a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225"}
Mar 21 10:06:57 crc kubenswrapper[4932]: I0321 10:06:57.395804 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwjhh" event={"ID":"24065cea-67cd-42ac-85e6-8f83bf2894ee","Type":"ContainerStarted","Data":"80b6d6de0ab6d528ecc5406daf68c2aa6af2485fef1481c127f438553735016f"}
Mar 21 10:06:57 crc kubenswrapper[4932]: I0321 10:06:57.398929 4932 generic.go:334] "Generic (PLEG): container
finished" podID="52131835-08f0-4ff6-9f53-a00024ab875c" containerID="2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf" exitCode=0 Mar 21 10:06:57 crc kubenswrapper[4932]: I0321 10:06:57.398979 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerDied","Data":"2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf"} Mar 21 10:06:57 crc kubenswrapper[4932]: I0321 10:06:57.399099 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerStarted","Data":"2160a0b227577be3ad3362b8104a4029245dcfbc6d75fd01b374035ef7fb162f"} Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.092887 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jh4ln"] Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.095506 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.102820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jh4ln"] Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.123307 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-utilities\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.123408 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-catalog-content\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.123464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5rd\" (UniqueName: \"kubernetes.io/projected/18ebe396-f189-4161-92a4-cab05f2a127f-kube-api-access-qs5rd\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.225602 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-utilities\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.226020 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-catalog-content\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.226219 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5rd\" (UniqueName: \"kubernetes.io/projected/18ebe396-f189-4161-92a4-cab05f2a127f-kube-api-access-qs5rd\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.226112 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-utilities\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.226642 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-catalog-content\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.244906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5rd\" (UniqueName: \"kubernetes.io/projected/18ebe396-f189-4161-92a4-cab05f2a127f-kube-api-access-qs5rd\") pod \"community-operators-jh4ln\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.420309 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerID="fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e" exitCode=0 Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.420389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwjhh" event={"ID":"24065cea-67cd-42ac-85e6-8f83bf2894ee","Type":"ContainerDied","Data":"fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e"} Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.424785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerStarted","Data":"d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90"} Mar 21 10:06:58 crc kubenswrapper[4932]: I0321 10:06:58.460485 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:06:59 crc kubenswrapper[4932]: I0321 10:06:59.091684 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jh4ln"] Mar 21 10:06:59 crc kubenswrapper[4932]: W0321 10:06:59.096499 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ebe396_f189_4161_92a4_cab05f2a127f.slice/crio-7aed2e603ea99ecdf899ffed016da1bf420fffec936c2eea07a84df47c3105e0 WatchSource:0}: Error finding container 7aed2e603ea99ecdf899ffed016da1bf420fffec936c2eea07a84df47c3105e0: Status 404 returned error can't find the container with id 7aed2e603ea99ecdf899ffed016da1bf420fffec936c2eea07a84df47c3105e0 Mar 21 10:06:59 crc kubenswrapper[4932]: I0321 10:06:59.436250 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwjhh" 
event={"ID":"24065cea-67cd-42ac-85e6-8f83bf2894ee","Type":"ContainerStarted","Data":"903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195"} Mar 21 10:06:59 crc kubenswrapper[4932]: I0321 10:06:59.437892 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh4ln" event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerStarted","Data":"e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2"} Mar 21 10:06:59 crc kubenswrapper[4932]: I0321 10:06:59.437943 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh4ln" event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerStarted","Data":"7aed2e603ea99ecdf899ffed016da1bf420fffec936c2eea07a84df47c3105e0"} Mar 21 10:06:59 crc kubenswrapper[4932]: I0321 10:06:59.459295 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gwjhh" podStartSLOduration=2.9909792570000002 podStartE2EDuration="4.459272641s" podCreationTimestamp="2026-03-21 10:06:55 +0000 UTC" firstStartedPulling="2026-03-21 10:06:57.398782377 +0000 UTC m=+4120.993980646" lastFinishedPulling="2026-03-21 10:06:58.867075761 +0000 UTC m=+4122.462274030" observedRunningTime="2026-03-21 10:06:59.451587511 +0000 UTC m=+4123.046785780" watchObservedRunningTime="2026-03-21 10:06:59.459272641 +0000 UTC m=+4123.054470910" Mar 21 10:07:01 crc kubenswrapper[4932]: I0321 10:07:01.457971 4932 generic.go:334] "Generic (PLEG): container finished" podID="18ebe396-f189-4161-92a4-cab05f2a127f" containerID="e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2" exitCode=0 Mar 21 10:07:01 crc kubenswrapper[4932]: I0321 10:07:01.458628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh4ln" 
event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerDied","Data":"e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2"} Mar 21 10:07:01 crc kubenswrapper[4932]: I0321 10:07:01.467746 4932 generic.go:334] "Generic (PLEG): container finished" podID="52131835-08f0-4ff6-9f53-a00024ab875c" containerID="d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90" exitCode=0 Mar 21 10:07:01 crc kubenswrapper[4932]: I0321 10:07:01.467782 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerDied","Data":"d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90"} Mar 21 10:07:01 crc kubenswrapper[4932]: I0321 10:07:01.704171 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:07:01 crc kubenswrapper[4932]: E0321 10:07:01.704495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:07:03 crc kubenswrapper[4932]: I0321 10:07:03.494158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerStarted","Data":"149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f"} Mar 21 10:07:03 crc kubenswrapper[4932]: I0321 10:07:03.497412 4932 generic.go:334] "Generic (PLEG): container finished" podID="18ebe396-f189-4161-92a4-cab05f2a127f" containerID="00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009" exitCode=0 Mar 21 10:07:03 crc kubenswrapper[4932]: I0321 10:07:03.497460 4932 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-jh4ln" event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerDied","Data":"00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009"} Mar 21 10:07:03 crc kubenswrapper[4932]: I0321 10:07:03.522748 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gv9pm" podStartSLOduration=3.578891874 podStartE2EDuration="8.522726109s" podCreationTimestamp="2026-03-21 10:06:55 +0000 UTC" firstStartedPulling="2026-03-21 10:06:57.401080979 +0000 UTC m=+4120.996279248" lastFinishedPulling="2026-03-21 10:07:02.344915214 +0000 UTC m=+4125.940113483" observedRunningTime="2026-03-21 10:07:03.512803337 +0000 UTC m=+4127.108001606" watchObservedRunningTime="2026-03-21 10:07:03.522726109 +0000 UTC m=+4127.117924378" Mar 21 10:07:04 crc kubenswrapper[4932]: I0321 10:07:04.509249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh4ln" event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerStarted","Data":"d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a"} Mar 21 10:07:04 crc kubenswrapper[4932]: I0321 10:07:04.528205 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jh4ln" podStartSLOduration=3.940954663 podStartE2EDuration="6.528184399s" podCreationTimestamp="2026-03-21 10:06:58 +0000 UTC" firstStartedPulling="2026-03-21 10:07:01.46146338 +0000 UTC m=+4125.056661649" lastFinishedPulling="2026-03-21 10:07:04.048693116 +0000 UTC m=+4127.643891385" observedRunningTime="2026-03-21 10:07:04.526847068 +0000 UTC m=+4128.122045347" watchObservedRunningTime="2026-03-21 10:07:04.528184399 +0000 UTC m=+4128.123382668" Mar 21 10:07:05 crc kubenswrapper[4932]: I0321 10:07:05.819805 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gv9pm" Mar 21 
10:07:05 crc kubenswrapper[4932]: I0321 10:07:05.819880 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gv9pm" Mar 21 10:07:06 crc kubenswrapper[4932]: I0321 10:07:06.047245 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gwjhh" Mar 21 10:07:06 crc kubenswrapper[4932]: I0321 10:07:06.047661 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gwjhh" Mar 21 10:07:06 crc kubenswrapper[4932]: I0321 10:07:06.093476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gwjhh" Mar 21 10:07:06 crc kubenswrapper[4932]: I0321 10:07:06.863458 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gv9pm" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" probeResult="failure" output=< Mar 21 10:07:06 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 10:07:06 crc kubenswrapper[4932]: > Mar 21 10:07:07 crc kubenswrapper[4932]: I0321 10:07:07.071899 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gwjhh" Mar 21 10:07:07 crc kubenswrapper[4932]: I0321 10:07:07.708953 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:07:07 crc kubenswrapper[4932]: E0321 10:07:07.709799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:07:08 crc kubenswrapper[4932]: I0321 
10:07:08.460890 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:07:08 crc kubenswrapper[4932]: I0321 10:07:08.460946 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:07:08 crc kubenswrapper[4932]: I0321 10:07:08.476757 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwjhh"] Mar 21 10:07:08 crc kubenswrapper[4932]: I0321 10:07:08.507844 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:07:08 crc kubenswrapper[4932]: I0321 10:07:08.541095 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gwjhh" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="registry-server" containerID="cri-o://903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195" gracePeriod=2 Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.097370 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwjhh" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.250112 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-catalog-content\") pod \"24065cea-67cd-42ac-85e6-8f83bf2894ee\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.250274 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mgc5\" (UniqueName: \"kubernetes.io/projected/24065cea-67cd-42ac-85e6-8f83bf2894ee-kube-api-access-6mgc5\") pod \"24065cea-67cd-42ac-85e6-8f83bf2894ee\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.250474 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-utilities\") pod \"24065cea-67cd-42ac-85e6-8f83bf2894ee\" (UID: \"24065cea-67cd-42ac-85e6-8f83bf2894ee\") " Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.251389 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-utilities" (OuterVolumeSpecName: "utilities") pod "24065cea-67cd-42ac-85e6-8f83bf2894ee" (UID: "24065cea-67cd-42ac-85e6-8f83bf2894ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.251806 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.256986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24065cea-67cd-42ac-85e6-8f83bf2894ee-kube-api-access-6mgc5" (OuterVolumeSpecName: "kube-api-access-6mgc5") pod "24065cea-67cd-42ac-85e6-8f83bf2894ee" (UID: "24065cea-67cd-42ac-85e6-8f83bf2894ee"). InnerVolumeSpecName "kube-api-access-6mgc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.281720 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24065cea-67cd-42ac-85e6-8f83bf2894ee" (UID: "24065cea-67cd-42ac-85e6-8f83bf2894ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.353893 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24065cea-67cd-42ac-85e6-8f83bf2894ee-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.353944 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mgc5\" (UniqueName: \"kubernetes.io/projected/24065cea-67cd-42ac-85e6-8f83bf2894ee-kube-api-access-6mgc5\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.554750 4932 generic.go:334] "Generic (PLEG): container finished" podID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerID="903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195" exitCode=0 Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.554827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwjhh" event={"ID":"24065cea-67cd-42ac-85e6-8f83bf2894ee","Type":"ContainerDied","Data":"903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195"} Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.554890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwjhh" event={"ID":"24065cea-67cd-42ac-85e6-8f83bf2894ee","Type":"ContainerDied","Data":"80b6d6de0ab6d528ecc5406daf68c2aa6af2485fef1481c127f438553735016f"} Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.554883 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwjhh" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.554915 4932 scope.go:117] "RemoveContainer" containerID="903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.579249 4932 scope.go:117] "RemoveContainer" containerID="fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.589586 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwjhh"] Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.599179 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwjhh"] Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.605294 4932 scope.go:117] "RemoveContainer" containerID="a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.679818 4932 scope.go:117] "RemoveContainer" containerID="903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195" Mar 21 10:07:09 crc kubenswrapper[4932]: E0321 10:07:09.680446 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195\": container with ID starting with 903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195 not found: ID does not exist" containerID="903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.680503 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195"} err="failed to get container status \"903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195\": rpc error: code = NotFound desc = could not find container 
\"903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195\": container with ID starting with 903f37b8e0198d13d95e6142cce1de17c995fb802159cbbe56054412b5571195 not found: ID does not exist" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.680536 4932 scope.go:117] "RemoveContainer" containerID="fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e" Mar 21 10:07:09 crc kubenswrapper[4932]: E0321 10:07:09.681075 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e\": container with ID starting with fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e not found: ID does not exist" containerID="fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.681114 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e"} err="failed to get container status \"fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e\": rpc error: code = NotFound desc = could not find container \"fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e\": container with ID starting with fec3d1d308ec6f52975e0c6381d537e3ad7b1f074cf1b91edf25dcabdb8b7a9e not found: ID does not exist" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.681135 4932 scope.go:117] "RemoveContainer" containerID="a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225" Mar 21 10:07:09 crc kubenswrapper[4932]: E0321 10:07:09.681524 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225\": container with ID starting with a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225 not found: ID does not exist" 
containerID="a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.681552 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225"} err="failed to get container status \"a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225\": rpc error: code = NotFound desc = could not find container \"a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225\": container with ID starting with a17beacf1d04e1594d092ff498ebe391375f86ae712b87f1d63b37160aace225 not found: ID does not exist" Mar 21 10:07:09 crc kubenswrapper[4932]: I0321 10:07:09.714727 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" path="/var/lib/kubelet/pods/24065cea-67cd-42ac-85e6-8f83bf2894ee/volumes" Mar 21 10:07:16 crc kubenswrapper[4932]: I0321 10:07:16.703713 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:07:16 crc kubenswrapper[4932]: E0321 10:07:16.704483 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:07:16 crc kubenswrapper[4932]: I0321 10:07:16.868454 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gv9pm" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" probeResult="failure" output=< Mar 21 10:07:16 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 10:07:16 crc kubenswrapper[4932]: > Mar 21 10:07:18 crc kubenswrapper[4932]: I0321 10:07:18.511364 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:07:18 crc kubenswrapper[4932]: I0321 10:07:18.555897 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jh4ln"] Mar 21 10:07:18 crc kubenswrapper[4932]: I0321 10:07:18.625057 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jh4ln" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="registry-server" containerID="cri-o://d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a" gracePeriod=2 Mar 21 10:07:18 crc kubenswrapper[4932]: I0321 10:07:18.702655 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:07:18 crc kubenswrapper[4932]: E0321 10:07:18.702914 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.099702 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.261846 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-utilities\") pod \"18ebe396-f189-4161-92a4-cab05f2a127f\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.261961 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-catalog-content\") pod \"18ebe396-f189-4161-92a4-cab05f2a127f\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.262029 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs5rd\" (UniqueName: \"kubernetes.io/projected/18ebe396-f189-4161-92a4-cab05f2a127f-kube-api-access-qs5rd\") pod \"18ebe396-f189-4161-92a4-cab05f2a127f\" (UID: \"18ebe396-f189-4161-92a4-cab05f2a127f\") " Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.262735 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-utilities" (OuterVolumeSpecName: "utilities") pod "18ebe396-f189-4161-92a4-cab05f2a127f" (UID: "18ebe396-f189-4161-92a4-cab05f2a127f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.268235 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ebe396-f189-4161-92a4-cab05f2a127f-kube-api-access-qs5rd" (OuterVolumeSpecName: "kube-api-access-qs5rd") pod "18ebe396-f189-4161-92a4-cab05f2a127f" (UID: "18ebe396-f189-4161-92a4-cab05f2a127f"). InnerVolumeSpecName "kube-api-access-qs5rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.317400 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18ebe396-f189-4161-92a4-cab05f2a127f" (UID: "18ebe396-f189-4161-92a4-cab05f2a127f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.364278 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.364619 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ebe396-f189-4161-92a4-cab05f2a127f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.364692 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs5rd\" (UniqueName: \"kubernetes.io/projected/18ebe396-f189-4161-92a4-cab05f2a127f-kube-api-access-qs5rd\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.635225 4932 generic.go:334] "Generic (PLEG): container finished" podID="18ebe396-f189-4161-92a4-cab05f2a127f" containerID="d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a" exitCode=0 Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.635276 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jh4ln" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.635294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh4ln" event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerDied","Data":"d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a"} Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.636170 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh4ln" event={"ID":"18ebe396-f189-4161-92a4-cab05f2a127f","Type":"ContainerDied","Data":"7aed2e603ea99ecdf899ffed016da1bf420fffec936c2eea07a84df47c3105e0"} Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.636200 4932 scope.go:117] "RemoveContainer" containerID="d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.655232 4932 scope.go:117] "RemoveContainer" containerID="00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.668566 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jh4ln"] Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.679635 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jh4ln"] Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.692730 4932 scope.go:117] "RemoveContainer" containerID="e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.712765 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" path="/var/lib/kubelet/pods/18ebe396-f189-4161-92a4-cab05f2a127f/volumes" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.733696 4932 scope.go:117] "RemoveContainer" 
containerID="d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a" Mar 21 10:07:19 crc kubenswrapper[4932]: E0321 10:07:19.734116 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a\": container with ID starting with d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a not found: ID does not exist" containerID="d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.734148 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a"} err="failed to get container status \"d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a\": rpc error: code = NotFound desc = could not find container \"d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a\": container with ID starting with d1112954e2618e521e191ed52ccb7e623262135df428ae8046a888bbbd8cf25a not found: ID does not exist" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.734170 4932 scope.go:117] "RemoveContainer" containerID="00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009" Mar 21 10:07:19 crc kubenswrapper[4932]: E0321 10:07:19.734546 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009\": container with ID starting with 00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009 not found: ID does not exist" containerID="00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.734569 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009"} err="failed to get container status \"00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009\": rpc error: code = NotFound desc = could not find container \"00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009\": container with ID starting with 00c29f6476a742332c41e28425f71059a4e4efbcaf5cb96caa3753e6fb362009 not found: ID does not exist" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.734582 4932 scope.go:117] "RemoveContainer" containerID="e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2" Mar 21 10:07:19 crc kubenswrapper[4932]: E0321 10:07:19.734985 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2\": container with ID starting with e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2 not found: ID does not exist" containerID="e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2" Mar 21 10:07:19 crc kubenswrapper[4932]: I0321 10:07:19.735032 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2"} err="failed to get container status \"e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2\": rpc error: code = NotFound desc = could not find container \"e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2\": container with ID starting with e9db98a90807d3971af4dc5f9e6e457eff738a9744701514d956fd2b002ddae2 not found: ID does not exist" Mar 21 10:07:26 crc kubenswrapper[4932]: I0321 10:07:26.867080 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gv9pm" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" probeResult="failure" output=< Mar 21 10:07:26 crc 
kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Mar 21 10:07:26 crc kubenswrapper[4932]: > Mar 21 10:07:27 crc kubenswrapper[4932]: I0321 10:07:27.709219 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:07:27 crc kubenswrapper[4932]: E0321 10:07:27.709878 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:07:30 crc kubenswrapper[4932]: I0321 10:07:30.225226 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:07:30 crc kubenswrapper[4932]: I0321 10:07:30.225569 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:07:30 crc kubenswrapper[4932]: I0321 10:07:30.703149 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:07:30 crc kubenswrapper[4932]: E0321 10:07:30.703686 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" 
podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:07:35 crc kubenswrapper[4932]: I0321 10:07:35.863116 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gv9pm" Mar 21 10:07:35 crc kubenswrapper[4932]: I0321 10:07:35.912547 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gv9pm" Mar 21 10:07:36 crc kubenswrapper[4932]: I0321 10:07:36.098028 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv9pm"] Mar 21 10:07:37 crc kubenswrapper[4932]: I0321 10:07:37.780626 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gv9pm" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" containerID="cri-o://149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f" gracePeriod=2 Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.287742 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9pm" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.448280 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-catalog-content\") pod \"52131835-08f0-4ff6-9f53-a00024ab875c\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.448504 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-utilities\") pod \"52131835-08f0-4ff6-9f53-a00024ab875c\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.448741 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpbnk\" (UniqueName: \"kubernetes.io/projected/52131835-08f0-4ff6-9f53-a00024ab875c-kube-api-access-rpbnk\") pod \"52131835-08f0-4ff6-9f53-a00024ab875c\" (UID: \"52131835-08f0-4ff6-9f53-a00024ab875c\") " Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.449371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-utilities" (OuterVolumeSpecName: "utilities") pod "52131835-08f0-4ff6-9f53-a00024ab875c" (UID: "52131835-08f0-4ff6-9f53-a00024ab875c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.454853 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52131835-08f0-4ff6-9f53-a00024ab875c-kube-api-access-rpbnk" (OuterVolumeSpecName: "kube-api-access-rpbnk") pod "52131835-08f0-4ff6-9f53-a00024ab875c" (UID: "52131835-08f0-4ff6-9f53-a00024ab875c"). InnerVolumeSpecName "kube-api-access-rpbnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.551393 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpbnk\" (UniqueName: \"kubernetes.io/projected/52131835-08f0-4ff6-9f53-a00024ab875c-kube-api-access-rpbnk\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.551425 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.580877 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52131835-08f0-4ff6-9f53-a00024ab875c" (UID: "52131835-08f0-4ff6-9f53-a00024ab875c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.652970 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52131835-08f0-4ff6-9f53-a00024ab875c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.791460 4932 generic.go:334] "Generic (PLEG): container finished" podID="52131835-08f0-4ff6-9f53-a00024ab875c" containerID="149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f" exitCode=0 Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.791498 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gv9pm" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.791504 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerDied","Data":"149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f"} Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.791537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gv9pm" event={"ID":"52131835-08f0-4ff6-9f53-a00024ab875c","Type":"ContainerDied","Data":"2160a0b227577be3ad3362b8104a4029245dcfbc6d75fd01b374035ef7fb162f"} Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.791562 4932 scope.go:117] "RemoveContainer" containerID="149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.816521 4932 scope.go:117] "RemoveContainer" containerID="d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.821187 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gv9pm"] Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.829491 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gv9pm"] Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.838867 4932 scope.go:117] "RemoveContainer" containerID="2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.879284 4932 scope.go:117] "RemoveContainer" containerID="149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f" Mar 21 10:07:38 crc kubenswrapper[4932]: E0321 10:07:38.879828 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f\": container with ID starting with 149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f not found: ID does not exist" containerID="149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.879863 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f"} err="failed to get container status \"149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f\": rpc error: code = NotFound desc = could not find container \"149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f\": container with ID starting with 149e1b657387dfdbde4a68c0047130d96935cfed2d652829f2ce9681a538103f not found: ID does not exist" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.879883 4932 scope.go:117] "RemoveContainer" containerID="d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90" Mar 21 10:07:38 crc kubenswrapper[4932]: E0321 10:07:38.880257 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90\": container with ID starting with d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90 not found: ID does not exist" containerID="d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.880501 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90"} err="failed to get container status \"d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90\": rpc error: code = NotFound desc = could not find container \"d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90\": container with ID 
starting with d019915488074315028857d0b19a1bb46419b2e69c4b1e7afe596c2c9dac0b90 not found: ID does not exist" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.880584 4932 scope.go:117] "RemoveContainer" containerID="2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf" Mar 21 10:07:38 crc kubenswrapper[4932]: E0321 10:07:38.880939 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf\": container with ID starting with 2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf not found: ID does not exist" containerID="2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf" Mar 21 10:07:38 crc kubenswrapper[4932]: I0321 10:07:38.880965 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf"} err="failed to get container status \"2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf\": rpc error: code = NotFound desc = could not find container \"2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf\": container with ID starting with 2c955c0e991a3a9183ff648833d2e67309430ef2da2f7eacaeaa36940a4b6bcf not found: ID does not exist" Mar 21 10:07:39 crc kubenswrapper[4932]: I0321 10:07:39.715263 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" path="/var/lib/kubelet/pods/52131835-08f0-4ff6-9f53-a00024ab875c/volumes" Mar 21 10:07:41 crc kubenswrapper[4932]: I0321 10:07:41.703503 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:07:41 crc kubenswrapper[4932]: E0321 10:07:41.704575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:07:42 crc kubenswrapper[4932]: I0321 10:07:42.703019 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:07:42 crc kubenswrapper[4932]: E0321 10:07:42.703543 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:07:52 crc kubenswrapper[4932]: I0321 10:07:52.702679 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:07:52 crc kubenswrapper[4932]: E0321 10:07:52.703402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:07:57 crc kubenswrapper[4932]: I0321 10:07:57.708852 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:07:57 crc kubenswrapper[4932]: E0321 10:07:57.709655 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:08:00 crc kubenswrapper[4932]: 
I0321 10:08:00.136809 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568128-z5t4v"] Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137505 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="extract-content" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137518 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="extract-content" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137538 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137545 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137562 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="extract-utilities" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137568 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="extract-utilities" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137576 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137582 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137603 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="extract-content" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137611 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="extract-content" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137622 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137629 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137645 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="extract-content" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137652 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="extract-content" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137662 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="extract-utilities" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137669 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="extract-utilities" Mar 21 10:08:00 crc kubenswrapper[4932]: E0321 10:08:00.137699 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="extract-utilities" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137716 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="extract-utilities" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137917 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="24065cea-67cd-42ac-85e6-8f83bf2894ee" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137931 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="18ebe396-f189-4161-92a4-cab05f2a127f" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.137941 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="52131835-08f0-4ff6-9f53-a00024ab875c" containerName="registry-server" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.138765 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.140546 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.141032 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.141102 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.145028 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568128-z5t4v"] Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.190845 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7tr9\" (UniqueName: \"kubernetes.io/projected/011c0ddf-ad3f-456b-b41b-998be328c24e-kube-api-access-x7tr9\") pod \"auto-csr-approver-29568128-z5t4v\" (UID: \"011c0ddf-ad3f-456b-b41b-998be328c24e\") " pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.225176 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.225238 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.292834 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7tr9\" (UniqueName: \"kubernetes.io/projected/011c0ddf-ad3f-456b-b41b-998be328c24e-kube-api-access-x7tr9\") pod \"auto-csr-approver-29568128-z5t4v\" (UID: \"011c0ddf-ad3f-456b-b41b-998be328c24e\") " pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.312318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7tr9\" (UniqueName: \"kubernetes.io/projected/011c0ddf-ad3f-456b-b41b-998be328c24e-kube-api-access-x7tr9\") pod \"auto-csr-approver-29568128-z5t4v\" (UID: \"011c0ddf-ad3f-456b-b41b-998be328c24e\") " pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.459602 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.883057 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568128-z5t4v"] Mar 21 10:08:00 crc kubenswrapper[4932]: I0321 10:08:00.981966 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" event={"ID":"011c0ddf-ad3f-456b-b41b-998be328c24e","Type":"ContainerStarted","Data":"ee5031f21ddd5508dfea24a31f375efce200f749e61f533d0cd4d26e76252e23"} Mar 21 10:08:04 crc kubenswrapper[4932]: I0321 10:08:04.702470 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:08:05 crc kubenswrapper[4932]: I0321 10:08:05.016003 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489"} Mar 21 10:08:05 crc kubenswrapper[4932]: I0321 10:08:05.018007 4932 generic.go:334] "Generic (PLEG): container finished" podID="011c0ddf-ad3f-456b-b41b-998be328c24e" containerID="c05059af076c932b0fae2bae7f72a5250f6788b9fa6c85fedbfb08682002c939" exitCode=0 Mar 21 10:08:05 crc kubenswrapper[4932]: I0321 10:08:05.018059 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" event={"ID":"011c0ddf-ad3f-456b-b41b-998be328c24e","Type":"ContainerDied","Data":"c05059af076c932b0fae2bae7f72a5250f6788b9fa6c85fedbfb08682002c939"} Mar 21 10:08:06 crc kubenswrapper[4932]: I0321 10:08:06.317397 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:06 crc kubenswrapper[4932]: I0321 10:08:06.417450 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7tr9\" (UniqueName: \"kubernetes.io/projected/011c0ddf-ad3f-456b-b41b-998be328c24e-kube-api-access-x7tr9\") pod \"011c0ddf-ad3f-456b-b41b-998be328c24e\" (UID: \"011c0ddf-ad3f-456b-b41b-998be328c24e\") " Mar 21 10:08:06 crc kubenswrapper[4932]: I0321 10:08:06.423579 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011c0ddf-ad3f-456b-b41b-998be328c24e-kube-api-access-x7tr9" (OuterVolumeSpecName: "kube-api-access-x7tr9") pod "011c0ddf-ad3f-456b-b41b-998be328c24e" (UID: "011c0ddf-ad3f-456b-b41b-998be328c24e"). InnerVolumeSpecName "kube-api-access-x7tr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:08:06 crc kubenswrapper[4932]: I0321 10:08:06.519734 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7tr9\" (UniqueName: \"kubernetes.io/projected/011c0ddf-ad3f-456b-b41b-998be328c24e-kube-api-access-x7tr9\") on node \"crc\" DevicePath \"\"" Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.053183 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" event={"ID":"011c0ddf-ad3f-456b-b41b-998be328c24e","Type":"ContainerDied","Data":"ee5031f21ddd5508dfea24a31f375efce200f749e61f533d0cd4d26e76252e23"} Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.053767 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5031f21ddd5508dfea24a31f375efce200f749e61f533d0cd4d26e76252e23" Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.053307 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568128-z5t4v" Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.400326 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568122-vs97z"] Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.409152 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568122-vs97z"] Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.716146 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24aa54d7-3784-4be4-a6a2-414ce29ae040" path="/var/lib/kubelet/pods/24aa54d7-3784-4be4-a6a2-414ce29ae040/volumes" Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.740693 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:08:07 crc kubenswrapper[4932]: I0321 10:08:07.740793 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:08:08 crc kubenswrapper[4932]: I0321 10:08:08.702417 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:08:10 crc kubenswrapper[4932]: I0321 10:08:10.079483 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d"} Mar 21 10:08:14 crc kubenswrapper[4932]: I0321 10:08:14.148806 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" exitCode=1 Mar 21 10:08:14 crc kubenswrapper[4932]: I0321 10:08:14.148912 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" 
event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489"} Mar 21 10:08:14 crc kubenswrapper[4932]: I0321 10:08:14.149113 4932 scope.go:117] "RemoveContainer" containerID="3054eef4d2a5178f5cc6e80c3fba90b7f1eb559a665d8d84817d3c01553534f3" Mar 21 10:08:14 crc kubenswrapper[4932]: I0321 10:08:14.151203 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:08:14 crc kubenswrapper[4932]: E0321 10:08:14.155392 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:08:17 crc kubenswrapper[4932]: I0321 10:08:17.740410 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:08:17 crc kubenswrapper[4932]: I0321 10:08:17.741019 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:08:17 crc kubenswrapper[4932]: I0321 10:08:17.741885 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:08:17 crc kubenswrapper[4932]: E0321 10:08:17.742093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:08:17 crc kubenswrapper[4932]: I0321 10:08:17.948610 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:08:17 crc kubenswrapper[4932]: I0321 10:08:17.948665 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:08:19 crc kubenswrapper[4932]: I0321 10:08:19.197430 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" exitCode=1 Mar 21 10:08:19 crc kubenswrapper[4932]: I0321 10:08:19.197528 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d"} Mar 21 10:08:19 crc kubenswrapper[4932]: I0321 10:08:19.197869 4932 scope.go:117] "RemoveContainer" containerID="9448100cf9f4f22f66b5e1955fb9fd88bccf1b69a6519bae861564e2be8052d6" Mar 21 10:08:19 crc kubenswrapper[4932]: I0321 10:08:19.198401 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:08:19 crc kubenswrapper[4932]: E0321 10:08:19.198712 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:08:27 crc kubenswrapper[4932]: I0321 10:08:27.948094 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:08:27 crc kubenswrapper[4932]: I0321 10:08:27.948639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:08:27 crc kubenswrapper[4932]: I0321 10:08:27.949546 4932 scope.go:117] 
"RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:08:27 crc kubenswrapper[4932]: E0321 10:08:27.949779 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:08:28 crc kubenswrapper[4932]: I0321 10:08:28.703596 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:08:28 crc kubenswrapper[4932]: E0321 10:08:28.704141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:08:30 crc kubenswrapper[4932]: I0321 10:08:30.225926 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:08:30 crc kubenswrapper[4932]: I0321 10:08:30.225992 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:08:30 crc kubenswrapper[4932]: I0321 10:08:30.226032 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 10:08:30 crc kubenswrapper[4932]: I0321 10:08:30.226868 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf6b421b3a548e404a9c4ab92c16ea4a62bc206206bc934f9da7ea8a97518f3d"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 10:08:30 crc kubenswrapper[4932]: I0321 10:08:30.226937 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://bf6b421b3a548e404a9c4ab92c16ea4a62bc206206bc934f9da7ea8a97518f3d" gracePeriod=600 Mar 21 10:08:31 crc kubenswrapper[4932]: I0321 10:08:31.324494 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="bf6b421b3a548e404a9c4ab92c16ea4a62bc206206bc934f9da7ea8a97518f3d" exitCode=0 Mar 21 10:08:31 crc kubenswrapper[4932]: I0321 10:08:31.324602 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"bf6b421b3a548e404a9c4ab92c16ea4a62bc206206bc934f9da7ea8a97518f3d"} Mar 21 10:08:31 crc kubenswrapper[4932]: I0321 10:08:31.325467 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90"} Mar 21 10:08:31 crc kubenswrapper[4932]: I0321 10:08:31.325503 4932 scope.go:117] "RemoveContainer" 
containerID="1912af34aa2150b6e3536ca54b3599f4bbad5424cab9a50b30db3be1bda91765" Mar 21 10:08:33 crc kubenswrapper[4932]: I0321 10:08:33.563488 4932 scope.go:117] "RemoveContainer" containerID="247426f2579aa4be0ef11e1f93dd9b7210ef4b4f94138929e2f11d9797f39259" Mar 21 10:08:39 crc kubenswrapper[4932]: I0321 10:08:39.702534 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:08:39 crc kubenswrapper[4932]: I0321 10:08:39.703006 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:08:39 crc kubenswrapper[4932]: E0321 10:08:39.703163 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:08:39 crc kubenswrapper[4932]: E0321 10:08:39.703178 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:08:51 crc kubenswrapper[4932]: I0321 10:08:51.702998 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:08:51 crc kubenswrapper[4932]: E0321 10:08:51.703759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:08:54 crc kubenswrapper[4932]: I0321 10:08:54.702864 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:08:54 crc kubenswrapper[4932]: E0321 10:08:54.703416 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:09:02 crc kubenswrapper[4932]: I0321 10:09:02.703036 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:09:02 crc kubenswrapper[4932]: E0321 10:09:02.703869 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:09:05 crc kubenswrapper[4932]: I0321 10:09:05.702290 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:09:05 crc kubenswrapper[4932]: E0321 10:09:05.702787 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:09:16 crc kubenswrapper[4932]: I0321 10:09:16.703271 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 
10:09:16 crc kubenswrapper[4932]: E0321 10:09:16.703851 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:09:16 crc kubenswrapper[4932]: I0321 10:09:16.703957 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:09:16 crc kubenswrapper[4932]: E0321 10:09:16.704147 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:09:28 crc kubenswrapper[4932]: I0321 10:09:28.703152 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:09:28 crc kubenswrapper[4932]: E0321 10:09:28.703999 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:09:28 crc kubenswrapper[4932]: I0321 10:09:28.704037 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:09:28 crc kubenswrapper[4932]: E0321 10:09:28.704287 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:09:41 crc kubenswrapper[4932]: I0321 10:09:41.703212 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:09:41 crc kubenswrapper[4932]: E0321 10:09:41.704107 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:09:43 crc kubenswrapper[4932]: I0321 10:09:43.704102 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:09:43 crc kubenswrapper[4932]: E0321 10:09:43.704506 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:09:55 crc kubenswrapper[4932]: I0321 10:09:55.703193 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:09:55 crc kubenswrapper[4932]: E0321 10:09:55.704089 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:09:57 crc kubenswrapper[4932]: I0321 
10:09:57.708468 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:09:57 crc kubenswrapper[4932]: E0321 10:09:57.709030 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.147281 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568130-m8jtm"] Mar 21 10:10:00 crc kubenswrapper[4932]: E0321 10:10:00.148079 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011c0ddf-ad3f-456b-b41b-998be328c24e" containerName="oc" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.148096 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="011c0ddf-ad3f-456b-b41b-998be328c24e" containerName="oc" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.148290 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="011c0ddf-ad3f-456b-b41b-998be328c24e" containerName="oc" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.148945 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.150948 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.151051 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.151673 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.156031 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568130-m8jtm"] Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.301017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkrj8\" (UniqueName: \"kubernetes.io/projected/71158171-5d75-42eb-8d7d-b9a87fa84866-kube-api-access-bkrj8\") pod \"auto-csr-approver-29568130-m8jtm\" (UID: \"71158171-5d75-42eb-8d7d-b9a87fa84866\") " pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.402842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkrj8\" (UniqueName: \"kubernetes.io/projected/71158171-5d75-42eb-8d7d-b9a87fa84866-kube-api-access-bkrj8\") pod \"auto-csr-approver-29568130-m8jtm\" (UID: \"71158171-5d75-42eb-8d7d-b9a87fa84866\") " pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.421551 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkrj8\" (UniqueName: \"kubernetes.io/projected/71158171-5d75-42eb-8d7d-b9a87fa84866-kube-api-access-bkrj8\") pod \"auto-csr-approver-29568130-m8jtm\" (UID: \"71158171-5d75-42eb-8d7d-b9a87fa84866\") " 
pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.471254 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:00 crc kubenswrapper[4932]: I0321 10:10:00.913849 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568130-m8jtm"] Mar 21 10:10:01 crc kubenswrapper[4932]: I0321 10:10:01.408014 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" event={"ID":"71158171-5d75-42eb-8d7d-b9a87fa84866","Type":"ContainerStarted","Data":"c9f1c40aebe3c24e7b928772ddb2b4c57d08528400b3346b03b055d020aa4df4"} Mar 21 10:10:02 crc kubenswrapper[4932]: I0321 10:10:02.416709 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" event={"ID":"71158171-5d75-42eb-8d7d-b9a87fa84866","Type":"ContainerStarted","Data":"ca6cbc6b3f151fee5ffe8372f501df69d0910792643165002ca8b5e1dcf47942"} Mar 21 10:10:02 crc kubenswrapper[4932]: I0321 10:10:02.441069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" podStartSLOduration=1.324720011 podStartE2EDuration="2.441050262s" podCreationTimestamp="2026-03-21 10:10:00 +0000 UTC" firstStartedPulling="2026-03-21 10:10:00.927895095 +0000 UTC m=+4304.523093364" lastFinishedPulling="2026-03-21 10:10:02.044225346 +0000 UTC m=+4305.639423615" observedRunningTime="2026-03-21 10:10:02.429388947 +0000 UTC m=+4306.024587236" watchObservedRunningTime="2026-03-21 10:10:02.441050262 +0000 UTC m=+4306.036248531" Mar 21 10:10:03 crc kubenswrapper[4932]: I0321 10:10:03.426739 4932 generic.go:334] "Generic (PLEG): container finished" podID="71158171-5d75-42eb-8d7d-b9a87fa84866" containerID="ca6cbc6b3f151fee5ffe8372f501df69d0910792643165002ca8b5e1dcf47942" exitCode=0 Mar 21 10:10:03 crc 
kubenswrapper[4932]: I0321 10:10:03.426842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" event={"ID":"71158171-5d75-42eb-8d7d-b9a87fa84866","Type":"ContainerDied","Data":"ca6cbc6b3f151fee5ffe8372f501df69d0910792643165002ca8b5e1dcf47942"} Mar 21 10:10:04 crc kubenswrapper[4932]: I0321 10:10:04.802128 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:04 crc kubenswrapper[4932]: I0321 10:10:04.895118 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkrj8\" (UniqueName: \"kubernetes.io/projected/71158171-5d75-42eb-8d7d-b9a87fa84866-kube-api-access-bkrj8\") pod \"71158171-5d75-42eb-8d7d-b9a87fa84866\" (UID: \"71158171-5d75-42eb-8d7d-b9a87fa84866\") " Mar 21 10:10:04 crc kubenswrapper[4932]: I0321 10:10:04.903634 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71158171-5d75-42eb-8d7d-b9a87fa84866-kube-api-access-bkrj8" (OuterVolumeSpecName: "kube-api-access-bkrj8") pod "71158171-5d75-42eb-8d7d-b9a87fa84866" (UID: "71158171-5d75-42eb-8d7d-b9a87fa84866"). InnerVolumeSpecName "kube-api-access-bkrj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:10:04 crc kubenswrapper[4932]: I0321 10:10:04.998070 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkrj8\" (UniqueName: \"kubernetes.io/projected/71158171-5d75-42eb-8d7d-b9a87fa84866-kube-api-access-bkrj8\") on node \"crc\" DevicePath \"\"" Mar 21 10:10:05 crc kubenswrapper[4932]: I0321 10:10:05.451537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" event={"ID":"71158171-5d75-42eb-8d7d-b9a87fa84866","Type":"ContainerDied","Data":"c9f1c40aebe3c24e7b928772ddb2b4c57d08528400b3346b03b055d020aa4df4"} Mar 21 10:10:05 crc kubenswrapper[4932]: I0321 10:10:05.451892 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9f1c40aebe3c24e7b928772ddb2b4c57d08528400b3346b03b055d020aa4df4" Mar 21 10:10:05 crc kubenswrapper[4932]: I0321 10:10:05.451610 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568130-m8jtm" Mar 21 10:10:05 crc kubenswrapper[4932]: I0321 10:10:05.498639 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568124-2zhws"] Mar 21 10:10:05 crc kubenswrapper[4932]: I0321 10:10:05.507392 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568124-2zhws"] Mar 21 10:10:05 crc kubenswrapper[4932]: I0321 10:10:05.714545 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb1008f-a18b-4e72-a85f-b20f3a9519fc" path="/var/lib/kubelet/pods/1bb1008f-a18b-4e72-a85f-b20f3a9519fc/volumes" Mar 21 10:10:10 crc kubenswrapper[4932]: I0321 10:10:10.703030 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:10:10 crc kubenswrapper[4932]: E0321 10:10:10.703922 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:10:11 crc kubenswrapper[4932]: I0321 10:10:11.703326 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:10:11 crc kubenswrapper[4932]: E0321 10:10:11.703803 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.493608 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvsz4"] Mar 21 10:10:25 crc kubenswrapper[4932]: E0321 10:10:25.494516 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71158171-5d75-42eb-8d7d-b9a87fa84866" containerName="oc" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.494533 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="71158171-5d75-42eb-8d7d-b9a87fa84866" containerName="oc" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.494740 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="71158171-5d75-42eb-8d7d-b9a87fa84866" containerName="oc" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.496544 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.509285 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvsz4"] Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.600941 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-utilities\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.601000 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-catalog-content\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.601024 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmnr\" (UniqueName: \"kubernetes.io/projected/60703df1-dd41-4923-92fe-302644dbb762-kube-api-access-2vmnr\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.702903 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.703068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-utilities\") pod \"certified-operators-xvsz4\" (UID: 
\"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.703112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-catalog-content\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.703129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vmnr\" (UniqueName: \"kubernetes.io/projected/60703df1-dd41-4923-92fe-302644dbb762-kube-api-access-2vmnr\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: E0321 10:10:25.703131 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.703578 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-utilities\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.703778 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-catalog-content\") pod \"certified-operators-xvsz4\" (UID: 
\"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.726894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vmnr\" (UniqueName: \"kubernetes.io/projected/60703df1-dd41-4923-92fe-302644dbb762-kube-api-access-2vmnr\") pod \"certified-operators-xvsz4\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:25 crc kubenswrapper[4932]: I0321 10:10:25.827885 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:26 crc kubenswrapper[4932]: I0321 10:10:26.332535 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvsz4"] Mar 21 10:10:26 crc kubenswrapper[4932]: I0321 10:10:26.702610 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:10:26 crc kubenswrapper[4932]: E0321 10:10:26.703564 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:10:27 crc kubenswrapper[4932]: I0321 10:10:27.126908 4932 generic.go:334] "Generic (PLEG): container finished" podID="60703df1-dd41-4923-92fe-302644dbb762" containerID="06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776" exitCode=0 Mar 21 10:10:27 crc kubenswrapper[4932]: I0321 10:10:27.126960 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvsz4" 
event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerDied","Data":"06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776"} Mar 21 10:10:27 crc kubenswrapper[4932]: I0321 10:10:27.127250 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvsz4" event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerStarted","Data":"1082af525b5704cee8a42fc620bc4ca5761ef31b4f174a446c458b728ff5e872"} Mar 21 10:10:28 crc kubenswrapper[4932]: I0321 10:10:28.136236 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvsz4" event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerStarted","Data":"7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58"} Mar 21 10:10:29 crc kubenswrapper[4932]: I0321 10:10:29.146313 4932 generic.go:334] "Generic (PLEG): container finished" podID="60703df1-dd41-4923-92fe-302644dbb762" containerID="7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58" exitCode=0 Mar 21 10:10:29 crc kubenswrapper[4932]: I0321 10:10:29.146417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvsz4" event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerDied","Data":"7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58"} Mar 21 10:10:30 crc kubenswrapper[4932]: I0321 10:10:30.164122 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvsz4" event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerStarted","Data":"8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b"} Mar 21 10:10:30 crc kubenswrapper[4932]: I0321 10:10:30.182914 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvsz4" podStartSLOduration=2.4245903269999998 podStartE2EDuration="5.182894625s" podCreationTimestamp="2026-03-21 
10:10:25 +0000 UTC" firstStartedPulling="2026-03-21 10:10:27.128983107 +0000 UTC m=+4330.724181396" lastFinishedPulling="2026-03-21 10:10:29.887287425 +0000 UTC m=+4333.482485694" observedRunningTime="2026-03-21 10:10:30.180288843 +0000 UTC m=+4333.775487112" watchObservedRunningTime="2026-03-21 10:10:30.182894625 +0000 UTC m=+4333.778092894" Mar 21 10:10:30 crc kubenswrapper[4932]: I0321 10:10:30.225562 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:10:30 crc kubenswrapper[4932]: I0321 10:10:30.225645 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:10:33 crc kubenswrapper[4932]: I0321 10:10:33.663434 4932 scope.go:117] "RemoveContainer" containerID="c8382eb6e0ac1f211c4af9a71fc16e4b4a2a3e2a4a432aa6e1f2f5b444480152" Mar 21 10:10:35 crc kubenswrapper[4932]: I0321 10:10:35.828552 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:35 crc kubenswrapper[4932]: I0321 10:10:35.829181 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:35 crc kubenswrapper[4932]: I0321 10:10:35.870590 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:36 crc kubenswrapper[4932]: I0321 10:10:36.265429 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:36 crc kubenswrapper[4932]: I0321 10:10:36.314399 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvsz4"] Mar 21 10:10:36 crc kubenswrapper[4932]: I0321 10:10:36.703797 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:10:36 crc kubenswrapper[4932]: E0321 10:10:36.704241 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:10:38 crc kubenswrapper[4932]: I0321 10:10:38.231518 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvsz4" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="registry-server" containerID="cri-o://8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b" gracePeriod=2 Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.027308 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.076738 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmnr\" (UniqueName: \"kubernetes.io/projected/60703df1-dd41-4923-92fe-302644dbb762-kube-api-access-2vmnr\") pod \"60703df1-dd41-4923-92fe-302644dbb762\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.076817 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-catalog-content\") pod \"60703df1-dd41-4923-92fe-302644dbb762\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.076949 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-utilities\") pod \"60703df1-dd41-4923-92fe-302644dbb762\" (UID: \"60703df1-dd41-4923-92fe-302644dbb762\") " Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.077970 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-utilities" (OuterVolumeSpecName: "utilities") pod "60703df1-dd41-4923-92fe-302644dbb762" (UID: "60703df1-dd41-4923-92fe-302644dbb762"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.082742 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60703df1-dd41-4923-92fe-302644dbb762-kube-api-access-2vmnr" (OuterVolumeSpecName: "kube-api-access-2vmnr") pod "60703df1-dd41-4923-92fe-302644dbb762" (UID: "60703df1-dd41-4923-92fe-302644dbb762"). InnerVolumeSpecName "kube-api-access-2vmnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.135022 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60703df1-dd41-4923-92fe-302644dbb762" (UID: "60703df1-dd41-4923-92fe-302644dbb762"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.179793 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vmnr\" (UniqueName: \"kubernetes.io/projected/60703df1-dd41-4923-92fe-302644dbb762-kube-api-access-2vmnr\") on node \"crc\" DevicePath \"\"" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.179837 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.179851 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60703df1-dd41-4923-92fe-302644dbb762-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.242240 4932 generic.go:334] "Generic (PLEG): container finished" podID="60703df1-dd41-4923-92fe-302644dbb762" containerID="8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b" exitCode=0 Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.242278 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvsz4" event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerDied","Data":"8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b"} Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.242305 4932 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-xvsz4" event={"ID":"60703df1-dd41-4923-92fe-302644dbb762","Type":"ContainerDied","Data":"1082af525b5704cee8a42fc620bc4ca5761ef31b4f174a446c458b728ff5e872"} Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.242327 4932 scope.go:117] "RemoveContainer" containerID="8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.242467 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvsz4" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.275397 4932 scope.go:117] "RemoveContainer" containerID="7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.281988 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvsz4"] Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.290582 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvsz4"] Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.297004 4932 scope.go:117] "RemoveContainer" containerID="06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.353311 4932 scope.go:117] "RemoveContainer" containerID="8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b" Mar 21 10:10:39 crc kubenswrapper[4932]: E0321 10:10:39.353847 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b\": container with ID starting with 8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b not found: ID does not exist" containerID="8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 
10:10:39.353890 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b"} err="failed to get container status \"8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b\": rpc error: code = NotFound desc = could not find container \"8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b\": container with ID starting with 8d8e3e376e35b77f84ac795b0c702abb062759cb69076597245dd829fcc1306b not found: ID does not exist" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.353918 4932 scope.go:117] "RemoveContainer" containerID="7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58" Mar 21 10:10:39 crc kubenswrapper[4932]: E0321 10:10:39.354331 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58\": container with ID starting with 7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58 not found: ID does not exist" containerID="7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.354371 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58"} err="failed to get container status \"7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58\": rpc error: code = NotFound desc = could not find container \"7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58\": container with ID starting with 7a7644277645f9d4c4684bd38922beb01a5edd4ddb2c2ba458a76ee7e93f1a58 not found: ID does not exist" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.354386 4932 scope.go:117] "RemoveContainer" containerID="06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776" Mar 21 10:10:39 crc 
kubenswrapper[4932]: E0321 10:10:39.354634 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776\": container with ID starting with 06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776 not found: ID does not exist" containerID="06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.354659 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776"} err="failed to get container status \"06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776\": rpc error: code = NotFound desc = could not find container \"06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776\": container with ID starting with 06e9369701fd16941198099f7e4ae431b87549361452af9ccb22ba43d51f1776 not found: ID does not exist" Mar 21 10:10:39 crc kubenswrapper[4932]: I0321 10:10:39.713964 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60703df1-dd41-4923-92fe-302644dbb762" path="/var/lib/kubelet/pods/60703df1-dd41-4923-92fe-302644dbb762/volumes" Mar 21 10:10:41 crc kubenswrapper[4932]: I0321 10:10:41.703126 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:10:41 crc kubenswrapper[4932]: E0321 10:10:41.703709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:10:48 crc kubenswrapper[4932]: I0321 10:10:48.702660 4932 scope.go:117] "RemoveContainer" 
containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:10:48 crc kubenswrapper[4932]: E0321 10:10:48.703234 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:10:54 crc kubenswrapper[4932]: I0321 10:10:54.702474 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:10:54 crc kubenswrapper[4932]: E0321 10:10:54.703157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:11:00 crc kubenswrapper[4932]: I0321 10:11:00.225849 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:11:00 crc kubenswrapper[4932]: I0321 10:11:00.226558 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:11:00 crc kubenswrapper[4932]: I0321 10:11:00.702557 4932 scope.go:117] "RemoveContainer" 
containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:11:00 crc kubenswrapper[4932]: E0321 10:11:00.702961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:11:08 crc kubenswrapper[4932]: I0321 10:11:08.702601 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:11:08 crc kubenswrapper[4932]: E0321 10:11:08.703204 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:11:13 crc kubenswrapper[4932]: I0321 10:11:13.702887 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:11:13 crc kubenswrapper[4932]: E0321 10:11:13.703742 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:11:20 crc kubenswrapper[4932]: I0321 10:11:20.702941 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:11:20 crc kubenswrapper[4932]: E0321 10:11:20.703743 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:11:24 crc kubenswrapper[4932]: I0321 10:11:24.702657 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:11:24 crc kubenswrapper[4932]: E0321 10:11:24.703396 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.225633 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.226191 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.226243 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.226949 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.227011 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" gracePeriod=600 Mar 21 10:11:30 crc kubenswrapper[4932]: E0321 10:11:30.373061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.788715 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" exitCode=0 Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.789025 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90"} Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.789169 4932 scope.go:117] "RemoveContainer" containerID="bf6b421b3a548e404a9c4ab92c16ea4a62bc206206bc934f9da7ea8a97518f3d" Mar 21 10:11:30 crc kubenswrapper[4932]: I0321 10:11:30.790621 4932 
scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:11:30 crc kubenswrapper[4932]: E0321 10:11:30.791184 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:11:35 crc kubenswrapper[4932]: I0321 10:11:35.703771 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:11:35 crc kubenswrapper[4932]: E0321 10:11:35.704899 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:11:36 crc kubenswrapper[4932]: I0321 10:11:36.702813 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:11:36 crc kubenswrapper[4932]: E0321 10:11:36.703096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:11:42 crc kubenswrapper[4932]: I0321 10:11:42.702888 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:11:42 crc 
kubenswrapper[4932]: E0321 10:11:42.703790 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:11:46 crc kubenswrapper[4932]: I0321 10:11:46.704209 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:11:46 crc kubenswrapper[4932]: E0321 10:11:46.705480 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:11:49 crc kubenswrapper[4932]: I0321 10:11:49.702392 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:11:49 crc kubenswrapper[4932]: E0321 10:11:49.703129 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:11:53 crc kubenswrapper[4932]: I0321 10:11:53.703383 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:11:53 crc kubenswrapper[4932]: E0321 10:11:53.704361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:11:57 crc kubenswrapper[4932]: I0321 10:11:57.709633 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:11:57 crc kubenswrapper[4932]: E0321 10:11:57.710495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.156228 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568132-b7h7s"] Mar 21 10:12:00 crc kubenswrapper[4932]: E0321 10:12:00.159006 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="registry-server" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.159174 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="registry-server" Mar 21 10:12:00 crc kubenswrapper[4932]: E0321 10:12:00.159303 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="extract-content" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.159413 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="extract-content" Mar 21 10:12:00 crc kubenswrapper[4932]: E0321 10:12:00.159507 4932 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="extract-utilities" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.159594 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="extract-utilities" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.160042 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="60703df1-dd41-4923-92fe-302644dbb762" containerName="registry-server" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.161430 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.164834 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.165311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.165558 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.172883 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568132-b7h7s"] Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.316917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfwrp\" (UniqueName: \"kubernetes.io/projected/58686ce9-e7df-457f-bb8b-90df9387e154-kube-api-access-mfwrp\") pod \"auto-csr-approver-29568132-b7h7s\" (UID: \"58686ce9-e7df-457f-bb8b-90df9387e154\") " pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.418638 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mfwrp\" (UniqueName: \"kubernetes.io/projected/58686ce9-e7df-457f-bb8b-90df9387e154-kube-api-access-mfwrp\") pod \"auto-csr-approver-29568132-b7h7s\" (UID: \"58686ce9-e7df-457f-bb8b-90df9387e154\") " pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.436922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfwrp\" (UniqueName: \"kubernetes.io/projected/58686ce9-e7df-457f-bb8b-90df9387e154-kube-api-access-mfwrp\") pod \"auto-csr-approver-29568132-b7h7s\" (UID: \"58686ce9-e7df-457f-bb8b-90df9387e154\") " pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.491910 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.922611 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568132-b7h7s"] Mar 21 10:12:00 crc kubenswrapper[4932]: I0321 10:12:00.932478 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 10:12:01 crc kubenswrapper[4932]: I0321 10:12:01.042884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" event={"ID":"58686ce9-e7df-457f-bb8b-90df9387e154","Type":"ContainerStarted","Data":"38e2685322b79c1cd900c99cc656bd4637f08cbdf573601c70e6acb9faa0a434"} Mar 21 10:12:03 crc kubenswrapper[4932]: I0321 10:12:03.062851 4932 generic.go:334] "Generic (PLEG): container finished" podID="58686ce9-e7df-457f-bb8b-90df9387e154" containerID="cdf09de020add96e7074e2c014f6477ab4c74f8186463a1f2d81a2b10a6a176c" exitCode=0 Mar 21 10:12:03 crc kubenswrapper[4932]: I0321 10:12:03.062980 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" 
event={"ID":"58686ce9-e7df-457f-bb8b-90df9387e154","Type":"ContainerDied","Data":"cdf09de020add96e7074e2c014f6477ab4c74f8186463a1f2d81a2b10a6a176c"} Mar 21 10:12:03 crc kubenswrapper[4932]: I0321 10:12:03.703645 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:12:03 crc kubenswrapper[4932]: E0321 10:12:03.704720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:12:04 crc kubenswrapper[4932]: I0321 10:12:04.541144 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:04 crc kubenswrapper[4932]: I0321 10:12:04.706944 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfwrp\" (UniqueName: \"kubernetes.io/projected/58686ce9-e7df-457f-bb8b-90df9387e154-kube-api-access-mfwrp\") pod \"58686ce9-e7df-457f-bb8b-90df9387e154\" (UID: \"58686ce9-e7df-457f-bb8b-90df9387e154\") " Mar 21 10:12:04 crc kubenswrapper[4932]: I0321 10:12:04.712033 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58686ce9-e7df-457f-bb8b-90df9387e154-kube-api-access-mfwrp" (OuterVolumeSpecName: "kube-api-access-mfwrp") pod "58686ce9-e7df-457f-bb8b-90df9387e154" (UID: "58686ce9-e7df-457f-bb8b-90df9387e154"). InnerVolumeSpecName "kube-api-access-mfwrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:12:04 crc kubenswrapper[4932]: I0321 10:12:04.810270 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfwrp\" (UniqueName: \"kubernetes.io/projected/58686ce9-e7df-457f-bb8b-90df9387e154-kube-api-access-mfwrp\") on node \"crc\" DevicePath \"\"" Mar 21 10:12:05 crc kubenswrapper[4932]: I0321 10:12:05.082580 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" event={"ID":"58686ce9-e7df-457f-bb8b-90df9387e154","Type":"ContainerDied","Data":"38e2685322b79c1cd900c99cc656bd4637f08cbdf573601c70e6acb9faa0a434"} Mar 21 10:12:05 crc kubenswrapper[4932]: I0321 10:12:05.082619 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e2685322b79c1cd900c99cc656bd4637f08cbdf573601c70e6acb9faa0a434" Mar 21 10:12:05 crc kubenswrapper[4932]: I0321 10:12:05.082670 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568132-b7h7s" Mar 21 10:12:05 crc kubenswrapper[4932]: I0321 10:12:05.603104 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568126-2f2n7"] Mar 21 10:12:05 crc kubenswrapper[4932]: I0321 10:12:05.613520 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568126-2f2n7"] Mar 21 10:12:05 crc kubenswrapper[4932]: I0321 10:12:05.713226 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5385fb18-9c86-4807-917b-ed05fbfcec54" path="/var/lib/kubelet/pods/5385fb18-9c86-4807-917b-ed05fbfcec54/volumes" Mar 21 10:12:06 crc kubenswrapper[4932]: I0321 10:12:06.703316 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:12:06 crc kubenswrapper[4932]: E0321 10:12:06.703697 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:12:08 crc kubenswrapper[4932]: I0321 10:12:08.703301 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:12:08 crc kubenswrapper[4932]: E0321 10:12:08.703884 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:12:14 crc kubenswrapper[4932]: I0321 10:12:14.702760 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:12:14 crc kubenswrapper[4932]: E0321 10:12:14.703641 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:12:18 crc kubenswrapper[4932]: I0321 10:12:18.702727 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:12:18 crc kubenswrapper[4932]: E0321 10:12:18.702960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:12:23 crc kubenswrapper[4932]: I0321 10:12:23.702861 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:12:23 crc kubenswrapper[4932]: E0321 10:12:23.703681 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:12:27 crc kubenswrapper[4932]: I0321 10:12:27.713505 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:12:27 crc kubenswrapper[4932]: E0321 10:12:27.714725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:12:29 crc kubenswrapper[4932]: I0321 10:12:29.702456 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:12:29 crc kubenswrapper[4932]: E0321 10:12:29.703018 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:12:33 crc kubenswrapper[4932]: I0321 10:12:33.762844 4932 scope.go:117] "RemoveContainer" containerID="5cbf2418369ceeff4136302cdebe7a2a5526d432eb06703c65139985ada0a0c4" Mar 21 10:12:35 crc kubenswrapper[4932]: I0321 10:12:35.703715 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:12:35 crc kubenswrapper[4932]: E0321 10:12:35.704778 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:12:39 crc kubenswrapper[4932]: I0321 10:12:39.702858 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:12:39 crc kubenswrapper[4932]: E0321 10:12:39.703935 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:12:43 crc kubenswrapper[4932]: I0321 10:12:43.702959 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:12:43 crc kubenswrapper[4932]: E0321 10:12:43.703737 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:12:50 crc kubenswrapper[4932]: I0321 10:12:50.703453 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:12:50 crc kubenswrapper[4932]: E0321 10:12:50.704384 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:12:53 crc kubenswrapper[4932]: I0321 10:12:53.702229 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:12:53 crc kubenswrapper[4932]: E0321 10:12:53.703155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:12:57 crc kubenswrapper[4932]: I0321 10:12:57.708902 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:12:57 crc kubenswrapper[4932]: E0321 10:12:57.709654 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:13:01 crc kubenswrapper[4932]: I0321 10:13:01.702557 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:13:01 crc kubenswrapper[4932]: E0321 10:13:01.703294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:13:07 crc kubenswrapper[4932]: I0321 10:13:07.708814 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:13:07 crc kubenswrapper[4932]: E0321 10:13:07.709537 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:13:11 crc kubenswrapper[4932]: I0321 10:13:11.703208 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:13:11 crc kubenswrapper[4932]: E0321 10:13:11.704486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:13:15 crc 
kubenswrapper[4932]: I0321 10:13:15.793687 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:13:15 crc kubenswrapper[4932]: E0321 10:13:15.794521 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:13:18 crc kubenswrapper[4932]: I0321 10:13:18.703089 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:13:19 crc kubenswrapper[4932]: I0321 10:13:19.849199 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a"} Mar 21 10:13:24 crc kubenswrapper[4932]: I0321 10:13:24.702682 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:13:24 crc kubenswrapper[4932]: E0321 10:13:24.703405 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:13:26 crc kubenswrapper[4932]: I0321 10:13:26.917640 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" exitCode=1 Mar 21 
10:13:26 crc kubenswrapper[4932]: I0321 10:13:26.917715 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a"} Mar 21 10:13:26 crc kubenswrapper[4932]: I0321 10:13:26.917989 4932 scope.go:117] "RemoveContainer" containerID="6335786533a7a8f8931b65fc4b107945c1bf96b3f1fc45822420e80a46018489" Mar 21 10:13:26 crc kubenswrapper[4932]: I0321 10:13:26.918472 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:13:26 crc kubenswrapper[4932]: E0321 10:13:26.918677 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:13:27 crc kubenswrapper[4932]: I0321 10:13:27.740800 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:13:27 crc kubenswrapper[4932]: I0321 10:13:27.741110 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:13:27 crc kubenswrapper[4932]: I0321 10:13:27.741119 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:13:27 crc kubenswrapper[4932]: I0321 10:13:27.741129 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:13:27 crc kubenswrapper[4932]: I0321 10:13:27.931556 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:13:27 crc kubenswrapper[4932]: 
E0321 10:13:27.931858 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:13:30 crc kubenswrapper[4932]: I0321 10:13:30.703063 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:13:31 crc kubenswrapper[4932]: I0321 10:13:31.971812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05"} Mar 21 10:13:37 crc kubenswrapper[4932]: I0321 10:13:37.948282 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:13:37 crc kubenswrapper[4932]: I0321 10:13:37.948890 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:13:38 crc kubenswrapper[4932]: I0321 10:13:38.703389 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:13:38 crc kubenswrapper[4932]: I0321 10:13:38.703750 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:13:38 crc kubenswrapper[4932]: E0321 10:13:38.703917 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:13:38 crc kubenswrapper[4932]: E0321 10:13:38.703990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:13:40 crc kubenswrapper[4932]: I0321 10:13:40.041330 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" exitCode=1 Mar 21 10:13:40 crc kubenswrapper[4932]: I0321 10:13:40.041406 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05"} Mar 21 10:13:40 crc kubenswrapper[4932]: I0321 10:13:40.041664 4932 scope.go:117] "RemoveContainer" containerID="20a6fe9f26920369ef32624e22fd5219dcf260b049e5fb6f16b1a8b6b7ec8d9d" Mar 21 10:13:40 crc kubenswrapper[4932]: I0321 10:13:40.042171 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:13:40 crc kubenswrapper[4932]: E0321 10:13:40.042399 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:13:47 crc kubenswrapper[4932]: I0321 10:13:47.948160 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:13:47 crc kubenswrapper[4932]: I0321 10:13:47.948745 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:13:47 crc kubenswrapper[4932]: I0321 10:13:47.949262 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:13:47 crc kubenswrapper[4932]: E0321 10:13:47.949531 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:13:53 crc kubenswrapper[4932]: I0321 10:13:53.712835 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:13:53 crc kubenswrapper[4932]: E0321 10:13:53.713681 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:13:53 crc kubenswrapper[4932]: I0321 10:13:53.713965 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:13:53 crc kubenswrapper[4932]: E0321 10:13:53.715585 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.140104 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568134-nv42z"] Mar 21 10:14:00 crc kubenswrapper[4932]: E0321 10:14:00.141002 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58686ce9-e7df-457f-bb8b-90df9387e154" containerName="oc" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.141021 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="58686ce9-e7df-457f-bb8b-90df9387e154" containerName="oc" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.141235 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="58686ce9-e7df-457f-bb8b-90df9387e154" containerName="oc" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.142079 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.148150 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.149538 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.149716 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.152027 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568134-nv42z"] Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.190627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g47vv\" (UniqueName: 
\"kubernetes.io/projected/0796a1db-2603-49e5-be3a-2aa0dcc792b5-kube-api-access-g47vv\") pod \"auto-csr-approver-29568134-nv42z\" (UID: \"0796a1db-2603-49e5-be3a-2aa0dcc792b5\") " pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.292906 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g47vv\" (UniqueName: \"kubernetes.io/projected/0796a1db-2603-49e5-be3a-2aa0dcc792b5-kube-api-access-g47vv\") pod \"auto-csr-approver-29568134-nv42z\" (UID: \"0796a1db-2603-49e5-be3a-2aa0dcc792b5\") " pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.313834 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g47vv\" (UniqueName: \"kubernetes.io/projected/0796a1db-2603-49e5-be3a-2aa0dcc792b5-kube-api-access-g47vv\") pod \"auto-csr-approver-29568134-nv42z\" (UID: \"0796a1db-2603-49e5-be3a-2aa0dcc792b5\") " pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.463828 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:00 crc kubenswrapper[4932]: I0321 10:14:00.903656 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568134-nv42z"] Mar 21 10:14:01 crc kubenswrapper[4932]: I0321 10:14:01.235499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568134-nv42z" event={"ID":"0796a1db-2603-49e5-be3a-2aa0dcc792b5","Type":"ContainerStarted","Data":"94be1422561496f2e063dab5e5dba707a22a8995ca6480f946b92da5429877a8"} Mar 21 10:14:02 crc kubenswrapper[4932]: I0321 10:14:02.246030 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568134-nv42z" event={"ID":"0796a1db-2603-49e5-be3a-2aa0dcc792b5","Type":"ContainerStarted","Data":"703df61816b6d6e36db52433aa7379d279ddfda7fd0a5fa95c42d1bd51d8a0fb"} Mar 21 10:14:02 crc kubenswrapper[4932]: I0321 10:14:02.263097 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568134-nv42z" podStartSLOduration=1.476261922 podStartE2EDuration="2.263075271s" podCreationTimestamp="2026-03-21 10:14:00 +0000 UTC" firstStartedPulling="2026-03-21 10:14:00.911071509 +0000 UTC m=+4544.506269778" lastFinishedPulling="2026-03-21 10:14:01.697884858 +0000 UTC m=+4545.293083127" observedRunningTime="2026-03-21 10:14:02.261712659 +0000 UTC m=+4545.856910938" watchObservedRunningTime="2026-03-21 10:14:02.263075271 +0000 UTC m=+4545.858273540" Mar 21 10:14:02 crc kubenswrapper[4932]: I0321 10:14:02.702676 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:14:02 crc kubenswrapper[4932]: E0321 10:14:02.702956 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:14:03 crc kubenswrapper[4932]: I0321 10:14:03.257026 4932 generic.go:334] "Generic (PLEG): container finished" podID="0796a1db-2603-49e5-be3a-2aa0dcc792b5" containerID="703df61816b6d6e36db52433aa7379d279ddfda7fd0a5fa95c42d1bd51d8a0fb" exitCode=0 Mar 21 10:14:03 crc kubenswrapper[4932]: I0321 10:14:03.257097 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568134-nv42z" event={"ID":"0796a1db-2603-49e5-be3a-2aa0dcc792b5","Type":"ContainerDied","Data":"703df61816b6d6e36db52433aa7379d279ddfda7fd0a5fa95c42d1bd51d8a0fb"} Mar 21 10:14:04 crc kubenswrapper[4932]: I0321 10:14:04.576166 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:04 crc kubenswrapper[4932]: I0321 10:14:04.714524 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g47vv\" (UniqueName: \"kubernetes.io/projected/0796a1db-2603-49e5-be3a-2aa0dcc792b5-kube-api-access-g47vv\") pod \"0796a1db-2603-49e5-be3a-2aa0dcc792b5\" (UID: \"0796a1db-2603-49e5-be3a-2aa0dcc792b5\") " Mar 21 10:14:04 crc kubenswrapper[4932]: I0321 10:14:04.721320 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0796a1db-2603-49e5-be3a-2aa0dcc792b5-kube-api-access-g47vv" (OuterVolumeSpecName: "kube-api-access-g47vv") pod "0796a1db-2603-49e5-be3a-2aa0dcc792b5" (UID: "0796a1db-2603-49e5-be3a-2aa0dcc792b5"). InnerVolumeSpecName "kube-api-access-g47vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:14:04 crc kubenswrapper[4932]: I0321 10:14:04.817457 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g47vv\" (UniqueName: \"kubernetes.io/projected/0796a1db-2603-49e5-be3a-2aa0dcc792b5-kube-api-access-g47vv\") on node \"crc\" DevicePath \"\"" Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.283681 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568134-nv42z" event={"ID":"0796a1db-2603-49e5-be3a-2aa0dcc792b5","Type":"ContainerDied","Data":"94be1422561496f2e063dab5e5dba707a22a8995ca6480f946b92da5429877a8"} Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.283720 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94be1422561496f2e063dab5e5dba707a22a8995ca6480f946b92da5429877a8" Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.283775 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568134-nv42z" Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.326177 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568128-z5t4v"] Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.333586 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568128-z5t4v"] Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.702750 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:14:05 crc kubenswrapper[4932]: E0321 10:14:05.703071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:14:05 crc kubenswrapper[4932]: I0321 10:14:05.714309 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011c0ddf-ad3f-456b-b41b-998be328c24e" path="/var/lib/kubelet/pods/011c0ddf-ad3f-456b-b41b-998be328c24e/volumes" Mar 21 10:14:07 crc kubenswrapper[4932]: I0321 10:14:07.708110 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:14:07 crc kubenswrapper[4932]: E0321 10:14:07.708660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:14:16 crc kubenswrapper[4932]: I0321 10:14:16.703178 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:14:16 crc kubenswrapper[4932]: E0321 10:14:16.703958 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:14:17 crc kubenswrapper[4932]: I0321 10:14:17.709943 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:14:17 crc kubenswrapper[4932]: E0321 10:14:17.710513 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:14:19 crc kubenswrapper[4932]: I0321 10:14:19.702513 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:14:19 crc kubenswrapper[4932]: E0321 10:14:19.702909 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:14:29 crc kubenswrapper[4932]: I0321 10:14:29.703723 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:14:29 crc kubenswrapper[4932]: E0321 10:14:29.704538 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:14:31 crc kubenswrapper[4932]: I0321 10:14:31.702757 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:14:31 crc kubenswrapper[4932]: E0321 10:14:31.703302 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" 
podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:14:33 crc kubenswrapper[4932]: I0321 10:14:33.856053 4932 scope.go:117] "RemoveContainer" containerID="c05059af076c932b0fae2bae7f72a5250f6788b9fa6c85fedbfb08682002c939" Mar 21 10:14:34 crc kubenswrapper[4932]: I0321 10:14:34.702622 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:14:34 crc kubenswrapper[4932]: E0321 10:14:34.703193 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:14:41 crc kubenswrapper[4932]: I0321 10:14:41.718249 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:14:41 crc kubenswrapper[4932]: E0321 10:14:41.719634 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:14:45 crc kubenswrapper[4932]: I0321 10:14:45.702716 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:14:45 crc kubenswrapper[4932]: E0321 10:14:45.703533 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" 
pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:14:47 crc kubenswrapper[4932]: I0321 10:14:47.711213 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:14:47 crc kubenswrapper[4932]: E0321 10:14:47.711832 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:14:54 crc kubenswrapper[4932]: I0321 10:14:54.702222 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:14:54 crc kubenswrapper[4932]: E0321 10:14:54.702882 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:14:57 crc kubenswrapper[4932]: I0321 10:14:57.711185 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:14:57 crc kubenswrapper[4932]: E0321 10:14:57.712004 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.140137 
4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf"] Mar 21 10:15:00 crc kubenswrapper[4932]: E0321 10:15:00.140829 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0796a1db-2603-49e5-be3a-2aa0dcc792b5" containerName="oc" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.140842 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0796a1db-2603-49e5-be3a-2aa0dcc792b5" containerName="oc" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.141024 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0796a1db-2603-49e5-be3a-2aa0dcc792b5" containerName="oc" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.141709 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.143488 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.143613 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.150089 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf"] Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.276521 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08bc37c2-f56c-4dd7-bf45-918d4721c92e-secret-volume\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.276585 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08bc37c2-f56c-4dd7-bf45-918d4721c92e-config-volume\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.276635 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks5xq\" (UniqueName: \"kubernetes.io/projected/08bc37c2-f56c-4dd7-bf45-918d4721c92e-kube-api-access-ks5xq\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.378601 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08bc37c2-f56c-4dd7-bf45-918d4721c92e-secret-volume\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.378662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08bc37c2-f56c-4dd7-bf45-918d4721c92e-config-volume\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.378709 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks5xq\" (UniqueName: \"kubernetes.io/projected/08bc37c2-f56c-4dd7-bf45-918d4721c92e-kube-api-access-ks5xq\") pod \"collect-profiles-29568135-rtkcf\" (UID: 
\"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.379981 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08bc37c2-f56c-4dd7-bf45-918d4721c92e-config-volume\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.386107 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08bc37c2-f56c-4dd7-bf45-918d4721c92e-secret-volume\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.395114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks5xq\" (UniqueName: \"kubernetes.io/projected/08bc37c2-f56c-4dd7-bf45-918d4721c92e-kube-api-access-ks5xq\") pod \"collect-profiles-29568135-rtkcf\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.462226 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:00 crc kubenswrapper[4932]: I0321 10:15:00.901214 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf"] Mar 21 10:15:01 crc kubenswrapper[4932]: I0321 10:15:01.746563 4932 generic.go:334] "Generic (PLEG): container finished" podID="08bc37c2-f56c-4dd7-bf45-918d4721c92e" containerID="7751e99491c6f5c3e08f7516cd85ad77680e824aad9064751f426f0989d82412" exitCode=0 Mar 21 10:15:01 crc kubenswrapper[4932]: I0321 10:15:01.746830 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" event={"ID":"08bc37c2-f56c-4dd7-bf45-918d4721c92e","Type":"ContainerDied","Data":"7751e99491c6f5c3e08f7516cd85ad77680e824aad9064751f426f0989d82412"} Mar 21 10:15:01 crc kubenswrapper[4932]: I0321 10:15:01.746856 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" event={"ID":"08bc37c2-f56c-4dd7-bf45-918d4721c92e","Type":"ContainerStarted","Data":"8063373ac49559bf2e6777016b9cd0cc8883c86a4c1268a1315600a03f5e2aad"} Mar 21 10:15:02 crc kubenswrapper[4932]: I0321 10:15:02.703043 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:15:02 crc kubenswrapper[4932]: E0321 10:15:02.703489 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.063014 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.232115 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks5xq\" (UniqueName: \"kubernetes.io/projected/08bc37c2-f56c-4dd7-bf45-918d4721c92e-kube-api-access-ks5xq\") pod \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.232381 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08bc37c2-f56c-4dd7-bf45-918d4721c92e-config-volume\") pod \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.232476 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08bc37c2-f56c-4dd7-bf45-918d4721c92e-secret-volume\") pod \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\" (UID: \"08bc37c2-f56c-4dd7-bf45-918d4721c92e\") " Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.233022 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08bc37c2-f56c-4dd7-bf45-918d4721c92e-config-volume" (OuterVolumeSpecName: "config-volume") pod "08bc37c2-f56c-4dd7-bf45-918d4721c92e" (UID: "08bc37c2-f56c-4dd7-bf45-918d4721c92e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.250131 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08bc37c2-f56c-4dd7-bf45-918d4721c92e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08bc37c2-f56c-4dd7-bf45-918d4721c92e" (UID: "08bc37c2-f56c-4dd7-bf45-918d4721c92e"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.250143 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bc37c2-f56c-4dd7-bf45-918d4721c92e-kube-api-access-ks5xq" (OuterVolumeSpecName: "kube-api-access-ks5xq") pod "08bc37c2-f56c-4dd7-bf45-918d4721c92e" (UID: "08bc37c2-f56c-4dd7-bf45-918d4721c92e"). InnerVolumeSpecName "kube-api-access-ks5xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.334948 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08bc37c2-f56c-4dd7-bf45-918d4721c92e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.334979 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08bc37c2-f56c-4dd7-bf45-918d4721c92e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.334989 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks5xq\" (UniqueName: \"kubernetes.io/projected/08bc37c2-f56c-4dd7-bf45-918d4721c92e-kube-api-access-ks5xq\") on node \"crc\" DevicePath \"\"" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.762835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" event={"ID":"08bc37c2-f56c-4dd7-bf45-918d4721c92e","Type":"ContainerDied","Data":"8063373ac49559bf2e6777016b9cd0cc8883c86a4c1268a1315600a03f5e2aad"} Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.762871 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8063373ac49559bf2e6777016b9cd0cc8883c86a4c1268a1315600a03f5e2aad" Mar 21 10:15:03 crc kubenswrapper[4932]: I0321 10:15:03.762923 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29568135-rtkcf" Mar 21 10:15:04 crc kubenswrapper[4932]: I0321 10:15:04.131116 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"] Mar 21 10:15:04 crc kubenswrapper[4932]: I0321 10:15:04.140863 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29568090-rhn46"] Mar 21 10:15:05 crc kubenswrapper[4932]: I0321 10:15:05.702448 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:15:05 crc kubenswrapper[4932]: E0321 10:15:05.703566 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:15:05 crc kubenswrapper[4932]: I0321 10:15:05.715133 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c09d9da-3112-44ac-a0fb-bba652fc0b97" path="/var/lib/kubelet/pods/3c09d9da-3112-44ac-a0fb-bba652fc0b97/volumes" Mar 21 10:15:10 crc kubenswrapper[4932]: I0321 10:15:10.702819 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:15:10 crc kubenswrapper[4932]: E0321 10:15:10.703629 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:15:16 crc 
kubenswrapper[4932]: I0321 10:15:16.702550 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:15:16 crc kubenswrapper[4932]: E0321 10:15:16.703314 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:15:17 crc kubenswrapper[4932]: I0321 10:15:17.709016 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:15:17 crc kubenswrapper[4932]: E0321 10:15:17.709320 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:15:21 crc kubenswrapper[4932]: I0321 10:15:21.704164 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:15:21 crc kubenswrapper[4932]: E0321 10:15:21.705385 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:15:27 crc kubenswrapper[4932]: I0321 10:15:27.709217 4932 scope.go:117] "RemoveContainer" 
containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:15:27 crc kubenswrapper[4932]: E0321 10:15:27.709847 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:15:32 crc kubenswrapper[4932]: I0321 10:15:32.702288 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:15:32 crc kubenswrapper[4932]: E0321 10:15:32.703009 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:15:34 crc kubenswrapper[4932]: I0321 10:15:34.095776 4932 scope.go:117] "RemoveContainer" containerID="f21dd9afa340f895ee4717f693e953103e36479394e8f155a4dc65c3c4f65ba3" Mar 21 10:15:34 crc kubenswrapper[4932]: I0321 10:15:34.702870 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:15:34 crc kubenswrapper[4932]: E0321 10:15:34.703385 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:15:38 crc kubenswrapper[4932]: I0321 10:15:38.702820 4932 
scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:15:38 crc kubenswrapper[4932]: E0321 10:15:38.703536 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:15:44 crc kubenswrapper[4932]: I0321 10:15:44.703976 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:15:44 crc kubenswrapper[4932]: E0321 10:15:44.704904 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:15:46 crc kubenswrapper[4932]: I0321 10:15:46.703575 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:15:46 crc kubenswrapper[4932]: E0321 10:15:46.704621 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:15:51 crc kubenswrapper[4932]: I0321 10:15:51.703285 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:15:51 crc 
kubenswrapper[4932]: E0321 10:15:51.704775 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:15:55 crc kubenswrapper[4932]: I0321 10:15:55.703718 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:15:55 crc kubenswrapper[4932]: E0321 10:15:55.704407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:15:57 crc kubenswrapper[4932]: I0321 10:15:57.709882 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:15:57 crc kubenswrapper[4932]: E0321 10:15:57.710617 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.161034 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568136-bk5bf"] Mar 21 10:16:00 crc kubenswrapper[4932]: E0321 10:16:00.161716 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08bc37c2-f56c-4dd7-bf45-918d4721c92e" 
containerName="collect-profiles" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.161730 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="08bc37c2-f56c-4dd7-bf45-918d4721c92e" containerName="collect-profiles" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.161919 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="08bc37c2-f56c-4dd7-bf45-918d4721c92e" containerName="collect-profiles" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.162596 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.169402 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.169581 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.172547 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.178603 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568136-bk5bf"] Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.317095 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm29p\" (UniqueName: \"kubernetes.io/projected/6e75f1d8-dcbe-4800-8400-02267ec183d3-kube-api-access-qm29p\") pod \"auto-csr-approver-29568136-bk5bf\" (UID: \"6e75f1d8-dcbe-4800-8400-02267ec183d3\") " pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.419241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm29p\" (UniqueName: 
\"kubernetes.io/projected/6e75f1d8-dcbe-4800-8400-02267ec183d3-kube-api-access-qm29p\") pod \"auto-csr-approver-29568136-bk5bf\" (UID: \"6e75f1d8-dcbe-4800-8400-02267ec183d3\") " pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:00 crc kubenswrapper[4932]: I0321 10:16:00.929882 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm29p\" (UniqueName: \"kubernetes.io/projected/6e75f1d8-dcbe-4800-8400-02267ec183d3-kube-api-access-qm29p\") pod \"auto-csr-approver-29568136-bk5bf\" (UID: \"6e75f1d8-dcbe-4800-8400-02267ec183d3\") " pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:01 crc kubenswrapper[4932]: I0321 10:16:01.080388 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:01 crc kubenswrapper[4932]: I0321 10:16:01.511408 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568136-bk5bf"] Mar 21 10:16:02 crc kubenswrapper[4932]: I0321 10:16:02.260188 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" event={"ID":"6e75f1d8-dcbe-4800-8400-02267ec183d3","Type":"ContainerStarted","Data":"ac5532515a0237af2ac0934c219244c535e8825a5ec3d92bfe507c517e6660a2"} Mar 21 10:16:03 crc kubenswrapper[4932]: I0321 10:16:03.270698 4932 generic.go:334] "Generic (PLEG): container finished" podID="6e75f1d8-dcbe-4800-8400-02267ec183d3" containerID="7ea534fc43e058478d7d2c0092c395c3bba32e6ce3001790806da9cec6a7128d" exitCode=0 Mar 21 10:16:03 crc kubenswrapper[4932]: I0321 10:16:03.270823 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" event={"ID":"6e75f1d8-dcbe-4800-8400-02267ec183d3","Type":"ContainerDied","Data":"7ea534fc43e058478d7d2c0092c395c3bba32e6ce3001790806da9cec6a7128d"} Mar 21 10:16:04 crc kubenswrapper[4932]: I0321 10:16:04.655561 4932 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:04 crc kubenswrapper[4932]: I0321 10:16:04.825910 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm29p\" (UniqueName: \"kubernetes.io/projected/6e75f1d8-dcbe-4800-8400-02267ec183d3-kube-api-access-qm29p\") pod \"6e75f1d8-dcbe-4800-8400-02267ec183d3\" (UID: \"6e75f1d8-dcbe-4800-8400-02267ec183d3\") " Mar 21 10:16:04 crc kubenswrapper[4932]: I0321 10:16:04.832110 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e75f1d8-dcbe-4800-8400-02267ec183d3-kube-api-access-qm29p" (OuterVolumeSpecName: "kube-api-access-qm29p") pod "6e75f1d8-dcbe-4800-8400-02267ec183d3" (UID: "6e75f1d8-dcbe-4800-8400-02267ec183d3"). InnerVolumeSpecName "kube-api-access-qm29p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:16:04 crc kubenswrapper[4932]: I0321 10:16:04.928570 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm29p\" (UniqueName: \"kubernetes.io/projected/6e75f1d8-dcbe-4800-8400-02267ec183d3-kube-api-access-qm29p\") on node \"crc\" DevicePath \"\"" Mar 21 10:16:05 crc kubenswrapper[4932]: I0321 10:16:05.289788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" event={"ID":"6e75f1d8-dcbe-4800-8400-02267ec183d3","Type":"ContainerDied","Data":"ac5532515a0237af2ac0934c219244c535e8825a5ec3d92bfe507c517e6660a2"} Mar 21 10:16:05 crc kubenswrapper[4932]: I0321 10:16:05.289831 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac5532515a0237af2ac0934c219244c535e8825a5ec3d92bfe507c517e6660a2" Mar 21 10:16:05 crc kubenswrapper[4932]: I0321 10:16:05.289833 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568136-bk5bf" Mar 21 10:16:05 crc kubenswrapper[4932]: I0321 10:16:05.738851 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568130-m8jtm"] Mar 21 10:16:05 crc kubenswrapper[4932]: I0321 10:16:05.745991 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568130-m8jtm"] Mar 21 10:16:06 crc kubenswrapper[4932]: I0321 10:16:06.702857 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:16:06 crc kubenswrapper[4932]: E0321 10:16:06.703715 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:16:07 crc kubenswrapper[4932]: I0321 10:16:07.713442 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71158171-5d75-42eb-8d7d-b9a87fa84866" path="/var/lib/kubelet/pods/71158171-5d75-42eb-8d7d-b9a87fa84866/volumes" Mar 21 10:16:09 crc kubenswrapper[4932]: I0321 10:16:09.702687 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:16:09 crc kubenswrapper[4932]: E0321 10:16:09.703205 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:16:12 crc kubenswrapper[4932]: I0321 
10:16:12.702973 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:16:12 crc kubenswrapper[4932]: E0321 10:16:12.703434 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:16:17 crc kubenswrapper[4932]: I0321 10:16:17.709986 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:16:17 crc kubenswrapper[4932]: E0321 10:16:17.711043 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:16:20 crc kubenswrapper[4932]: I0321 10:16:20.702322 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:16:20 crc kubenswrapper[4932]: E0321 10:16:20.702880 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:16:27 crc kubenswrapper[4932]: I0321 10:16:27.709489 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:16:27 
crc kubenswrapper[4932]: E0321 10:16:27.710235 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:16:28 crc kubenswrapper[4932]: I0321 10:16:28.703557 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:16:28 crc kubenswrapper[4932]: E0321 10:16:28.704208 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:16:34 crc kubenswrapper[4932]: I0321 10:16:34.161584 4932 scope.go:117] "RemoveContainer" containerID="ca6cbc6b3f151fee5ffe8372f501df69d0910792643165002ca8b5e1dcf47942" Mar 21 10:16:35 crc kubenswrapper[4932]: I0321 10:16:35.703392 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:16:35 crc kubenswrapper[4932]: E0321 10:16:35.703993 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:16:42 crc kubenswrapper[4932]: I0321 10:16:42.703111 4932 scope.go:117] "RemoveContainer" 
containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:16:42 crc kubenswrapper[4932]: E0321 10:16:42.704322 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:16:43 crc kubenswrapper[4932]: I0321 10:16:43.702946 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90" Mar 21 10:16:44 crc kubenswrapper[4932]: I0321 10:16:44.636493 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"8fec860383151efbde4fb71b965ba9220d9ab3cfffbe09f8942e6ba8572764e4"} Mar 21 10:16:46 crc kubenswrapper[4932]: I0321 10:16:46.702166 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:16:46 crc kubenswrapper[4932]: E0321 10:16:46.702715 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:16:57 crc kubenswrapper[4932]: I0321 10:16:57.708178 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:16:57 crc kubenswrapper[4932]: E0321 10:16:57.708835 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:17:01 crc kubenswrapper[4932]: I0321 10:17:01.703217 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:17:01 crc kubenswrapper[4932]: E0321 10:17:01.703980 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:17:12 crc kubenswrapper[4932]: I0321 10:17:12.702894 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:17:12 crc kubenswrapper[4932]: E0321 10:17:12.703601 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:17:16 crc kubenswrapper[4932]: I0321 10:17:16.702995 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:17:16 crc kubenswrapper[4932]: E0321 10:17:16.703676 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:17:24 crc kubenswrapper[4932]: I0321 
10:17:24.702692 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:17:24 crc kubenswrapper[4932]: E0321 10:17:24.703517 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:17:28 crc kubenswrapper[4932]: I0321 10:17:28.702195 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:17:28 crc kubenswrapper[4932]: E0321 10:17:28.703054 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:17:38 crc kubenswrapper[4932]: I0321 10:17:38.703468 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:17:38 crc kubenswrapper[4932]: E0321 10:17:38.704278 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:17:39 crc kubenswrapper[4932]: I0321 10:17:39.702422 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:17:39 crc kubenswrapper[4932]: E0321 10:17:39.702692 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.722740 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mpdqp/must-gather-5t2qn"] Mar 21 10:17:43 crc kubenswrapper[4932]: E0321 10:17:43.723686 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e75f1d8-dcbe-4800-8400-02267ec183d3" containerName="oc" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.723704 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e75f1d8-dcbe-4800-8400-02267ec183d3" containerName="oc" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.724005 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e75f1d8-dcbe-4800-8400-02267ec183d3" containerName="oc" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.725787 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.732291 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mpdqp/must-gather-5t2qn"] Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.736983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mpdqp"/"kube-root-ca.crt" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.741179 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mpdqp"/"openshift-service-ca.crt" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.890506 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9v8\" (UniqueName: \"kubernetes.io/projected/0f2bc9c7-c804-4108-a92e-f35274d0da17-kube-api-access-df9v8\") pod \"must-gather-5t2qn\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") " pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.890596 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f2bc9c7-c804-4108-a92e-f35274d0da17-must-gather-output\") pod \"must-gather-5t2qn\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") " pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.992834 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9v8\" (UniqueName: \"kubernetes.io/projected/0f2bc9c7-c804-4108-a92e-f35274d0da17-kube-api-access-df9v8\") pod \"must-gather-5t2qn\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") " pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.992940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f2bc9c7-c804-4108-a92e-f35274d0da17-must-gather-output\") pod \"must-gather-5t2qn\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") " pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:43 crc kubenswrapper[4932]: I0321 10:17:43.993670 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f2bc9c7-c804-4108-a92e-f35274d0da17-must-gather-output\") pod \"must-gather-5t2qn\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") " pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:44 crc kubenswrapper[4932]: I0321 10:17:44.012000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9v8\" (UniqueName: \"kubernetes.io/projected/0f2bc9c7-c804-4108-a92e-f35274d0da17-kube-api-access-df9v8\") pod \"must-gather-5t2qn\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") " pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:44 crc kubenswrapper[4932]: I0321 10:17:44.057757 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" Mar 21 10:17:44 crc kubenswrapper[4932]: I0321 10:17:44.493027 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mpdqp/must-gather-5t2qn"] Mar 21 10:17:44 crc kubenswrapper[4932]: I0321 10:17:44.500980 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 10:17:45 crc kubenswrapper[4932]: I0321 10:17:45.137499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" event={"ID":"0f2bc9c7-c804-4108-a92e-f35274d0da17","Type":"ContainerStarted","Data":"29abcb48b7a258bfa16cf90c66a9e18bdb93b7cbe37fa6eefe0e297994fd5534"} Mar 21 10:17:51 crc kubenswrapper[4932]: I0321 10:17:51.185594 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" event={"ID":"0f2bc9c7-c804-4108-a92e-f35274d0da17","Type":"ContainerStarted","Data":"abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81"} Mar 21 10:17:51 crc kubenswrapper[4932]: I0321 10:17:51.186138 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" event={"ID":"0f2bc9c7-c804-4108-a92e-f35274d0da17","Type":"ContainerStarted","Data":"32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"} Mar 21 10:17:51 crc kubenswrapper[4932]: I0321 10:17:51.203035 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" podStartSLOduration=2.5638170909999998 podStartE2EDuration="8.203012035s" podCreationTimestamp="2026-03-21 10:17:43 +0000 UTC" firstStartedPulling="2026-03-21 10:17:44.500895128 +0000 UTC m=+4768.096093397" lastFinishedPulling="2026-03-21 10:17:50.140090072 +0000 UTC m=+4773.735288341" observedRunningTime="2026-03-21 10:17:51.197935156 +0000 UTC m=+4774.793133425" watchObservedRunningTime="2026-03-21 10:17:51.203012035 +0000 UTC 
m=+4774.798210304" Mar 21 10:17:52 crc kubenswrapper[4932]: I0321 10:17:52.702943 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a" Mar 21 10:17:52 crc kubenswrapper[4932]: E0321 10:17:52.703523 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:17:54 crc kubenswrapper[4932]: I0321 10:17:54.702822 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:17:54 crc kubenswrapper[4932]: E0321 10:17:54.703404 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.087841 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mpdqp/crc-debug-vtq6q"] Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.089423 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.091535 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mpdqp"/"default-dockercfg-5pzcm" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.232959 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-host\") pod \"crc-debug-vtq6q\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") " pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.233040 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6mjb\" (UniqueName: \"kubernetes.io/projected/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-kube-api-access-t6mjb\") pod \"crc-debug-vtq6q\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") " pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.335813 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-host\") pod \"crc-debug-vtq6q\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") " pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.335885 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6mjb\" (UniqueName: \"kubernetes.io/projected/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-kube-api-access-t6mjb\") pod \"crc-debug-vtq6q\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") " pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.335969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-host\") pod \"crc-debug-vtq6q\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") " pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.356164 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6mjb\" (UniqueName: \"kubernetes.io/projected/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-kube-api-access-t6mjb\") pod \"crc-debug-vtq6q\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") " pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:55 crc kubenswrapper[4932]: I0321 10:17:55.410444 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" Mar 21 10:17:56 crc kubenswrapper[4932]: I0321 10:17:56.234739 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" event={"ID":"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab","Type":"ContainerStarted","Data":"3e6d253c3209407df35f081c6ce77676bc1ee5769e6237b6495f5021620be80f"} Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.150250 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568138-dhdzm"] Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.152014 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568138-dhdzm" Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.154604 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.155261 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.155799 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.172149 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568138-dhdzm"] Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.239216 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pw66\" (UniqueName: \"kubernetes.io/projected/f2737adb-00c1-4d5e-8bc1-026b03a5ff75-kube-api-access-7pw66\") pod \"auto-csr-approver-29568138-dhdzm\" (UID: \"f2737adb-00c1-4d5e-8bc1-026b03a5ff75\") " pod="openshift-infra/auto-csr-approver-29568138-dhdzm" Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.340891 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pw66\" (UniqueName: \"kubernetes.io/projected/f2737adb-00c1-4d5e-8bc1-026b03a5ff75-kube-api-access-7pw66\") pod \"auto-csr-approver-29568138-dhdzm\" (UID: \"f2737adb-00c1-4d5e-8bc1-026b03a5ff75\") " pod="openshift-infra/auto-csr-approver-29568138-dhdzm" Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.361230 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pw66\" (UniqueName: \"kubernetes.io/projected/f2737adb-00c1-4d5e-8bc1-026b03a5ff75-kube-api-access-7pw66\") pod \"auto-csr-approver-29568138-dhdzm\" (UID: \"f2737adb-00c1-4d5e-8bc1-026b03a5ff75\") " 
pod="openshift-infra/auto-csr-approver-29568138-dhdzm"
Mar 21 10:18:00 crc kubenswrapper[4932]: I0321 10:18:00.475427 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568138-dhdzm"
Mar 21 10:18:05 crc kubenswrapper[4932]: I0321 10:18:05.703422 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a"
Mar 21 10:18:05 crc kubenswrapper[4932]: E0321 10:18:05.704166 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:18:06 crc kubenswrapper[4932]: I0321 10:18:06.360204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" event={"ID":"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab","Type":"ContainerStarted","Data":"8e62d0a0b6db3efc7a21df243e36a500dd76b1e60c9091524de9ccec3a0fdc9a"}
Mar 21 10:18:06 crc kubenswrapper[4932]: I0321 10:18:06.378464 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" podStartSLOduration=0.755574934 podStartE2EDuration="11.378434588s" podCreationTimestamp="2026-03-21 10:17:55 +0000 UTC" firstStartedPulling="2026-03-21 10:17:55.517534814 +0000 UTC m=+4779.112733093" lastFinishedPulling="2026-03-21 10:18:06.140394478 +0000 UTC m=+4789.735592747" observedRunningTime="2026-03-21 10:18:06.377074126 +0000 UTC m=+4789.972272395" watchObservedRunningTime="2026-03-21 10:18:06.378434588 +0000 UTC m=+4789.973632857"
Mar 21 10:18:06 crc kubenswrapper[4932]: I0321 10:18:06.448607 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568138-dhdzm"]
Mar 21 10:18:06 crc kubenswrapper[4932]: W0321 10:18:06.451496 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2737adb_00c1_4d5e_8bc1_026b03a5ff75.slice/crio-25fd33b6633f05fcdcb0a632d8f643d759b65793a11b7f812fd4b75dbb75ffdb WatchSource:0}: Error finding container 25fd33b6633f05fcdcb0a632d8f643d759b65793a11b7f812fd4b75dbb75ffdb: Status 404 returned error can't find the container with id 25fd33b6633f05fcdcb0a632d8f643d759b65793a11b7f812fd4b75dbb75ffdb
Mar 21 10:18:07 crc kubenswrapper[4932]: I0321 10:18:07.389996 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568138-dhdzm" event={"ID":"f2737adb-00c1-4d5e-8bc1-026b03a5ff75","Type":"ContainerStarted","Data":"25fd33b6633f05fcdcb0a632d8f643d759b65793a11b7f812fd4b75dbb75ffdb"}
Mar 21 10:18:08 crc kubenswrapper[4932]: I0321 10:18:08.399556 4932 generic.go:334] "Generic (PLEG): container finished" podID="f2737adb-00c1-4d5e-8bc1-026b03a5ff75" containerID="535b37d3d2df11aa68875e3bc842a7df8dbb8e717a529e0c8c9c2f943c8a31e2" exitCode=0
Mar 21 10:18:08 crc kubenswrapper[4932]: I0321 10:18:08.399656 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568138-dhdzm" event={"ID":"f2737adb-00c1-4d5e-8bc1-026b03a5ff75","Type":"ContainerDied","Data":"535b37d3d2df11aa68875e3bc842a7df8dbb8e717a529e0c8c9c2f943c8a31e2"}
Mar 21 10:18:09 crc kubenswrapper[4932]: I0321 10:18:09.702730 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05"
Mar 21 10:18:09 crc kubenswrapper[4932]: E0321 10:18:09.703284 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:18:09 crc kubenswrapper[4932]: I0321 10:18:09.738774 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568138-dhdzm"
Mar 21 10:18:09 crc kubenswrapper[4932]: I0321 10:18:09.838919 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pw66\" (UniqueName: \"kubernetes.io/projected/f2737adb-00c1-4d5e-8bc1-026b03a5ff75-kube-api-access-7pw66\") pod \"f2737adb-00c1-4d5e-8bc1-026b03a5ff75\" (UID: \"f2737adb-00c1-4d5e-8bc1-026b03a5ff75\") "
Mar 21 10:18:09 crc kubenswrapper[4932]: I0321 10:18:09.847180 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2737adb-00c1-4d5e-8bc1-026b03a5ff75-kube-api-access-7pw66" (OuterVolumeSpecName: "kube-api-access-7pw66") pod "f2737adb-00c1-4d5e-8bc1-026b03a5ff75" (UID: "f2737adb-00c1-4d5e-8bc1-026b03a5ff75"). InnerVolumeSpecName "kube-api-access-7pw66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:18:09 crc kubenswrapper[4932]: I0321 10:18:09.941995 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pw66\" (UniqueName: \"kubernetes.io/projected/f2737adb-00c1-4d5e-8bc1-026b03a5ff75-kube-api-access-7pw66\") on node \"crc\" DevicePath \"\""
Mar 21 10:18:10 crc kubenswrapper[4932]: I0321 10:18:10.428415 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568138-dhdzm" event={"ID":"f2737adb-00c1-4d5e-8bc1-026b03a5ff75","Type":"ContainerDied","Data":"25fd33b6633f05fcdcb0a632d8f643d759b65793a11b7f812fd4b75dbb75ffdb"}
Mar 21 10:18:10 crc kubenswrapper[4932]: I0321 10:18:10.428455 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fd33b6633f05fcdcb0a632d8f643d759b65793a11b7f812fd4b75dbb75ffdb"
Mar 21 10:18:10 crc kubenswrapper[4932]: I0321 10:18:10.428486 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568138-dhdzm"
Mar 21 10:18:10 crc kubenswrapper[4932]: I0321 10:18:10.811382 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568132-b7h7s"]
Mar 21 10:18:10 crc kubenswrapper[4932]: I0321 10:18:10.820730 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568132-b7h7s"]
Mar 21 10:18:11 crc kubenswrapper[4932]: I0321 10:18:11.718080 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58686ce9-e7df-457f-bb8b-90df9387e154" path="/var/lib/kubelet/pods/58686ce9-e7df-457f-bb8b-90df9387e154/volumes"
Mar 21 10:18:17 crc kubenswrapper[4932]: I0321 10:18:17.056815 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a"
Mar 21 10:18:17 crc kubenswrapper[4932]: E0321 10:18:17.057783 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:18:22 crc kubenswrapper[4932]: I0321 10:18:22.702482 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05"
Mar 21 10:18:22 crc kubenswrapper[4932]: E0321 10:18:22.703183 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:18:24 crc kubenswrapper[4932]: I0321 10:18:24.559975 4932 generic.go:334] "Generic (PLEG): container finished" podID="0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" containerID="8e62d0a0b6db3efc7a21df243e36a500dd76b1e60c9091524de9ccec3a0fdc9a" exitCode=0
Mar 21 10:18:24 crc kubenswrapper[4932]: I0321 10:18:24.560152 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q" event={"ID":"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab","Type":"ContainerDied","Data":"8e62d0a0b6db3efc7a21df243e36a500dd76b1e60c9091524de9ccec3a0fdc9a"}
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.709435 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q"
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.746793 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mpdqp/crc-debug-vtq6q"]
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.758847 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mpdqp/crc-debug-vtq6q"]
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.857049 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6mjb\" (UniqueName: \"kubernetes.io/projected/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-kube-api-access-t6mjb\") pod \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") "
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.857210 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-host\") pod \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\" (UID: \"0b85a9c0-f6eb-434e-9d87-23e792c8a4ab\") "
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.857559 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-host" (OuterVolumeSpecName: "host") pod "0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" (UID: "0b85a9c0-f6eb-434e-9d87-23e792c8a4ab"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.857813 4932 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-host\") on node \"crc\" DevicePath \"\""
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.863873 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-kube-api-access-t6mjb" (OuterVolumeSpecName: "kube-api-access-t6mjb") pod "0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" (UID: "0b85a9c0-f6eb-434e-9d87-23e792c8a4ab"). InnerVolumeSpecName "kube-api-access-t6mjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:18:25 crc kubenswrapper[4932]: I0321 10:18:25.959476 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6mjb\" (UniqueName: \"kubernetes.io/projected/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab-kube-api-access-t6mjb\") on node \"crc\" DevicePath \"\""
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.580370 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e6d253c3209407df35f081c6ce77676bc1ee5769e6237b6495f5021620be80f"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.580625 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-vtq6q"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.948941 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mpdqp/crc-debug-6mswz"]
Mar 21 10:18:26 crc kubenswrapper[4932]: E0321 10:18:26.950617 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" containerName="container-00"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.950724 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" containerName="container-00"
Mar 21 10:18:26 crc kubenswrapper[4932]: E0321 10:18:26.950852 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2737adb-00c1-4d5e-8bc1-026b03a5ff75" containerName="oc"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.950935 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2737adb-00c1-4d5e-8bc1-026b03a5ff75" containerName="oc"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.951253 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" containerName="container-00"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.951941 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2737adb-00c1-4d5e-8bc1-026b03a5ff75" containerName="oc"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.954487 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:26 crc kubenswrapper[4932]: I0321 10:18:26.957424 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mpdqp"/"default-dockercfg-5pzcm"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.082333 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8tsg\" (UniqueName: \"kubernetes.io/projected/e59b4929-66ec-4de3-a6f5-8a185ede7e59-kube-api-access-r8tsg\") pod \"crc-debug-6mswz\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") " pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.082517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59b4929-66ec-4de3-a6f5-8a185ede7e59-host\") pod \"crc-debug-6mswz\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") " pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.184763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8tsg\" (UniqueName: \"kubernetes.io/projected/e59b4929-66ec-4de3-a6f5-8a185ede7e59-kube-api-access-r8tsg\") pod \"crc-debug-6mswz\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") " pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.184920 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59b4929-66ec-4de3-a6f5-8a185ede7e59-host\") pod \"crc-debug-6mswz\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") " pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.185052 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59b4929-66ec-4de3-a6f5-8a185ede7e59-host\") pod \"crc-debug-6mswz\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") " pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.203316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8tsg\" (UniqueName: \"kubernetes.io/projected/e59b4929-66ec-4de3-a6f5-8a185ede7e59-kube-api-access-r8tsg\") pod \"crc-debug-6mswz\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") " pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.272370 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:27 crc kubenswrapper[4932]: W0321 10:18:27.311503 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59b4929_66ec_4de3_a6f5_8a185ede7e59.slice/crio-a9b47543c6b8de34ef96c495e91cdc9c3acec94b8ddd014292e4ae930e43ddf7 WatchSource:0}: Error finding container a9b47543c6b8de34ef96c495e91cdc9c3acec94b8ddd014292e4ae930e43ddf7: Status 404 returned error can't find the container with id a9b47543c6b8de34ef96c495e91cdc9c3acec94b8ddd014292e4ae930e43ddf7
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.594180 4932 generic.go:334] "Generic (PLEG): container finished" podID="e59b4929-66ec-4de3-a6f5-8a185ede7e59" containerID="46c7aaa2f1708ed24bc32676d63c54b28d7dd0bcc96d021a089dda4daecdc96c" exitCode=1
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.594261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/crc-debug-6mswz" event={"ID":"e59b4929-66ec-4de3-a6f5-8a185ede7e59","Type":"ContainerDied","Data":"46c7aaa2f1708ed24bc32676d63c54b28d7dd0bcc96d021a089dda4daecdc96c"}
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.594640 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/crc-debug-6mswz" event={"ID":"e59b4929-66ec-4de3-a6f5-8a185ede7e59","Type":"ContainerStarted","Data":"a9b47543c6b8de34ef96c495e91cdc9c3acec94b8ddd014292e4ae930e43ddf7"}
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.636867 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mpdqp/crc-debug-6mswz"]
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.646368 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mpdqp/crc-debug-6mswz"]
Mar 21 10:18:27 crc kubenswrapper[4932]: I0321 10:18:27.712073 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b85a9c0-f6eb-434e-9d87-23e792c8a4ab" path="/var/lib/kubelet/pods/0b85a9c0-f6eb-434e-9d87-23e792c8a4ab/volumes"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.038958 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k72qk"]
Mar 21 10:18:28 crc kubenswrapper[4932]: E0321 10:18:28.039651 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b4929-66ec-4de3-a6f5-8a185ede7e59" containerName="container-00"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.039663 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b4929-66ec-4de3-a6f5-8a185ede7e59" containerName="container-00"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.039898 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b4929-66ec-4de3-a6f5-8a185ede7e59" containerName="container-00"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.041241 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.058920 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k72qk"]
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.103045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-catalog-content\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.103104 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjv4p\" (UniqueName: \"kubernetes.io/projected/69713df1-f374-455e-8cb2-eef32394d3ca-kube-api-access-vjv4p\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.103238 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-utilities\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.204687 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-catalog-content\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.204782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjv4p\" (UniqueName: \"kubernetes.io/projected/69713df1-f374-455e-8cb2-eef32394d3ca-kube-api-access-vjv4p\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.204895 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-utilities\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.205500 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-utilities\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.205784 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-catalog-content\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.228495 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjv4p\" (UniqueName: \"kubernetes.io/projected/69713df1-f374-455e-8cb2-eef32394d3ca-kube-api-access-vjv4p\") pod \"community-operators-k72qk\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.362329 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.710570 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.827549 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8tsg\" (UniqueName: \"kubernetes.io/projected/e59b4929-66ec-4de3-a6f5-8a185ede7e59-kube-api-access-r8tsg\") pod \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") "
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.827729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59b4929-66ec-4de3-a6f5-8a185ede7e59-host\") pod \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\" (UID: \"e59b4929-66ec-4de3-a6f5-8a185ede7e59\") "
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.827865 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e59b4929-66ec-4de3-a6f5-8a185ede7e59-host" (OuterVolumeSpecName: "host") pod "e59b4929-66ec-4de3-a6f5-8a185ede7e59" (UID: "e59b4929-66ec-4de3-a6f5-8a185ede7e59"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.828187 4932 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e59b4929-66ec-4de3-a6f5-8a185ede7e59-host\") on node \"crc\" DevicePath \"\""
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.832796 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59b4929-66ec-4de3-a6f5-8a185ede7e59-kube-api-access-r8tsg" (OuterVolumeSpecName: "kube-api-access-r8tsg") pod "e59b4929-66ec-4de3-a6f5-8a185ede7e59" (UID: "e59b4929-66ec-4de3-a6f5-8a185ede7e59"). InnerVolumeSpecName "kube-api-access-r8tsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.904002 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k72qk"]
Mar 21 10:18:28 crc kubenswrapper[4932]: W0321 10:18:28.905763 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69713df1_f374_455e_8cb2_eef32394d3ca.slice/crio-b1a4f175894e42b517ad42a87390a8e429b9be269422bcf2e333b86569f80bcc WatchSource:0}: Error finding container b1a4f175894e42b517ad42a87390a8e429b9be269422bcf2e333b86569f80bcc: Status 404 returned error can't find the container with id b1a4f175894e42b517ad42a87390a8e429b9be269422bcf2e333b86569f80bcc
Mar 21 10:18:28 crc kubenswrapper[4932]: I0321 10:18:28.930518 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8tsg\" (UniqueName: \"kubernetes.io/projected/e59b4929-66ec-4de3-a6f5-8a185ede7e59-kube-api-access-r8tsg\") on node \"crc\" DevicePath \"\""
Mar 21 10:18:29 crc kubenswrapper[4932]: I0321 10:18:29.620064 4932 generic.go:334] "Generic (PLEG): container finished" podID="69713df1-f374-455e-8cb2-eef32394d3ca" containerID="5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47" exitCode=0
Mar 21 10:18:29 crc kubenswrapper[4932]: I0321 10:18:29.620163 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerDied","Data":"5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47"}
Mar 21 10:18:29 crc kubenswrapper[4932]: I0321 10:18:29.620441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerStarted","Data":"b1a4f175894e42b517ad42a87390a8e429b9be269422bcf2e333b86569f80bcc"}
Mar 21 10:18:29 crc kubenswrapper[4932]: I0321 10:18:29.623761 4932 scope.go:117] "RemoveContainer" containerID="46c7aaa2f1708ed24bc32676d63c54b28d7dd0bcc96d021a089dda4daecdc96c"
Mar 21 10:18:29 crc kubenswrapper[4932]: I0321 10:18:29.623812 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/crc-debug-6mswz"
Mar 21 10:18:29 crc kubenswrapper[4932]: I0321 10:18:29.713239 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59b4929-66ec-4de3-a6f5-8a185ede7e59" path="/var/lib/kubelet/pods/e59b4929-66ec-4de3-a6f5-8a185ede7e59/volumes"
Mar 21 10:18:30 crc kubenswrapper[4932]: I0321 10:18:30.639126 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerStarted","Data":"82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a"}
Mar 21 10:18:31 crc kubenswrapper[4932]: I0321 10:18:31.653023 4932 generic.go:334] "Generic (PLEG): container finished" podID="69713df1-f374-455e-8cb2-eef32394d3ca" containerID="82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a" exitCode=0
Mar 21 10:18:31 crc kubenswrapper[4932]: I0321 10:18:31.653064 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerDied","Data":"82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a"}
Mar 21 10:18:31 crc kubenswrapper[4932]: I0321 10:18:31.702134 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a"
Mar 21 10:18:32 crc kubenswrapper[4932]: I0321 10:18:32.676096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"}
Mar 21 10:18:32 crc kubenswrapper[4932]: I0321 10:18:32.679544 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerStarted","Data":"a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484"}
Mar 21 10:18:32 crc kubenswrapper[4932]: I0321 10:18:32.724033 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k72qk" podStartSLOduration=2.300399231 podStartE2EDuration="4.724016058s" podCreationTimestamp="2026-03-21 10:18:28 +0000 UTC" firstStartedPulling="2026-03-21 10:18:29.623181087 +0000 UTC m=+4813.218379356" lastFinishedPulling="2026-03-21 10:18:32.046797914 +0000 UTC m=+4815.641996183" observedRunningTime="2026-03-21 10:18:32.717562177 +0000 UTC m=+4816.312760446" watchObservedRunningTime="2026-03-21 10:18:32.724016058 +0000 UTC m=+4816.319214327"
Mar 21 10:18:34 crc kubenswrapper[4932]: I0321 10:18:34.264149 4932 scope.go:117] "RemoveContainer" containerID="cdf09de020add96e7074e2c014f6477ab4c74f8186463a1f2d81a2b10a6a176c"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.422145 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54s68"]
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.424502 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.432658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54s68"]
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.570304 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-utilities\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.570619 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-catalog-content\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.570779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpg8j\" (UniqueName: \"kubernetes.io/projected/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-kube-api-access-zpg8j\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.672805 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-utilities\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.672875 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-catalog-content\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.672933 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpg8j\" (UniqueName: \"kubernetes.io/projected/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-kube-api-access-zpg8j\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.673555 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-utilities\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.673836 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-catalog-content\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.700443 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpg8j\" (UniqueName: \"kubernetes.io/projected/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-kube-api-access-zpg8j\") pod \"redhat-operators-54s68\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:35 crc kubenswrapper[4932]: I0321 10:18:35.742964 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:36 crc kubenswrapper[4932]: I0321 10:18:36.316720 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54s68"]
Mar 21 10:18:36 crc kubenswrapper[4932]: W0321 10:18:36.319803 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9712f8be_ceb5_44ae_aa65_5ab66fbff3e4.slice/crio-fe4283bd2dee29071894523d84adc8c9c4d6ccaf57617cf4874b2ab701f7a6d3 WatchSource:0}: Error finding container fe4283bd2dee29071894523d84adc8c9c4d6ccaf57617cf4874b2ab701f7a6d3: Status 404 returned error can't find the container with id fe4283bd2dee29071894523d84adc8c9c4d6ccaf57617cf4874b2ab701f7a6d3
Mar 21 10:18:36 crc kubenswrapper[4932]: I0321 10:18:36.702521 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05"
Mar 21 10:18:36 crc kubenswrapper[4932]: E0321 10:18:36.702962 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:18:36 crc kubenswrapper[4932]: I0321 10:18:36.721423 4932 generic.go:334] "Generic (PLEG): container finished" podID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerID="96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3" exitCode=0
Mar 21 10:18:36 crc kubenswrapper[4932]: I0321 10:18:36.721496 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerDied","Data":"96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3"}
Mar 21 10:18:36 crc kubenswrapper[4932]: I0321 10:18:36.721551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerStarted","Data":"fe4283bd2dee29071894523d84adc8c9c4d6ccaf57617cf4874b2ab701f7a6d3"}
Mar 21 10:18:37 crc kubenswrapper[4932]: I0321 10:18:37.732106 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerStarted","Data":"7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4"}
Mar 21 10:18:37 crc kubenswrapper[4932]: I0321 10:18:37.740977 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:18:37 crc kubenswrapper[4932]: I0321 10:18:37.741149 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:18:38 crc kubenswrapper[4932]: I0321 10:18:38.363401 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:38 crc kubenswrapper[4932]: I0321 10:18:38.363634 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k72qk"
Mar 21 10:18:39 crc kubenswrapper[4932]: I0321 10:18:39.413842 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k72qk" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="registry-server" probeResult="failure" output=<
Mar 21 10:18:39 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s
Mar 21 10:18:39 crc kubenswrapper[4932]: >
Mar 21 10:18:40 crc kubenswrapper[4932]: E0321 10:18:40.256224 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9712f8be_ceb5_44ae_aa65_5ab66fbff3e4.slice/crio-conmon-7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4.scope\": RecentStats: unable to find data in memory cache]"
Mar 21 10:18:40 crc kubenswrapper[4932]: I0321 10:18:40.764792 4932 generic.go:334] "Generic (PLEG): container finished" podID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerID="7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4" exitCode=0
Mar 21 10:18:40 crc kubenswrapper[4932]: I0321 10:18:40.764873 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerDied","Data":"7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4"}
Mar 21 10:18:40 crc kubenswrapper[4932]: I0321 10:18:40.767788 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" exitCode=1
Mar 21 10:18:40 crc kubenswrapper[4932]: I0321 10:18:40.767833 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"}
Mar 21 10:18:40 crc kubenswrapper[4932]: I0321 10:18:40.767867 4932 scope.go:117] "RemoveContainer" containerID="45c65af812a823e51585ddcc89e46298075ad59ee47c707e49ac873aacf3730a"
Mar 21 10:18:40 crc kubenswrapper[4932]: I0321 10:18:40.768373 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:18:40 crc kubenswrapper[4932]: E0321 10:18:40.768698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:18:41 crc kubenswrapper[4932]: I0321 10:18:41.780917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerStarted","Data":"85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef"}
Mar 21 10:18:41 crc kubenswrapper[4932]: I0321 10:18:41.800103 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54s68" podStartSLOduration=2.078671976 podStartE2EDuration="6.800083617s" podCreationTimestamp="2026-03-21 10:18:35 +0000 UTC" firstStartedPulling="2026-03-21 10:18:36.723128893 +0000 UTC m=+4820.318327162" lastFinishedPulling="2026-03-21 10:18:41.444540534 +0000 UTC m=+4825.039738803" observedRunningTime="2026-03-21 10:18:41.798011011 +0000 UTC m=+4825.393209290" watchObservedRunningTime="2026-03-21 10:18:41.800083617 +0000 UTC m=+4825.395281886"
Mar 21 10:18:45 crc kubenswrapper[4932]: I0321 10:18:45.743749 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:45 crc kubenswrapper[4932]: I0321 10:18:45.744258 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-54s68"
Mar 21 10:18:46 crc kubenswrapper[4932]: I0321 10:18:46.792288 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-54s68" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="registry-server" probeResult="failure" output=<
Mar 21 10:18:46 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s
Mar 21 10:18:46 crc kubenswrapper[4932]: >
Mar 21 10:18:47 crc kubenswrapper[4932]: I0321 10:18:47.740700 4932 kubelet.go:2542]
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:18:47 crc kubenswrapper[4932]: I0321 10:18:47.741513 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:18:47 crc kubenswrapper[4932]: E0321 10:18:47.741711 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:18:47 crc kubenswrapper[4932]: I0321 10:18:47.741865 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:18:47 crc kubenswrapper[4932]: I0321 10:18:47.830565 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:18:47 crc kubenswrapper[4932]: E0321 10:18:47.830928 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:18:48 crc kubenswrapper[4932]: I0321 10:18:48.426069 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k72qk" Mar 21 10:18:48 crc kubenswrapper[4932]: I0321 10:18:48.468183 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k72qk" Mar 21 10:18:48 crc kubenswrapper[4932]: I0321 10:18:48.661677 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-k72qk"] Mar 21 10:18:48 crc kubenswrapper[4932]: I0321 10:18:48.703058 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:18:49 crc kubenswrapper[4932]: I0321 10:18:49.848398 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"} Mar 21 10:18:49 crc kubenswrapper[4932]: I0321 10:18:49.848567 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k72qk" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="registry-server" containerID="cri-o://a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484" gracePeriod=2 Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.377602 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k72qk" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.483464 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-utilities\") pod \"69713df1-f374-455e-8cb2-eef32394d3ca\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.483549 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-catalog-content\") pod \"69713df1-f374-455e-8cb2-eef32394d3ca\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.483655 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjv4p\" (UniqueName: \"kubernetes.io/projected/69713df1-f374-455e-8cb2-eef32394d3ca-kube-api-access-vjv4p\") pod \"69713df1-f374-455e-8cb2-eef32394d3ca\" (UID: \"69713df1-f374-455e-8cb2-eef32394d3ca\") " Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.483987 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-utilities" (OuterVolumeSpecName: "utilities") pod "69713df1-f374-455e-8cb2-eef32394d3ca" (UID: "69713df1-f374-455e-8cb2-eef32394d3ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.484372 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.521375 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69713df1-f374-455e-8cb2-eef32394d3ca-kube-api-access-vjv4p" (OuterVolumeSpecName: "kube-api-access-vjv4p") pod "69713df1-f374-455e-8cb2-eef32394d3ca" (UID: "69713df1-f374-455e-8cb2-eef32394d3ca"). InnerVolumeSpecName "kube-api-access-vjv4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.539830 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69713df1-f374-455e-8cb2-eef32394d3ca" (UID: "69713df1-f374-455e-8cb2-eef32394d3ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.585570 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69713df1-f374-455e-8cb2-eef32394d3ca-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.585598 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjv4p\" (UniqueName: \"kubernetes.io/projected/69713df1-f374-455e-8cb2-eef32394d3ca-kube-api-access-vjv4p\") on node \"crc\" DevicePath \"\"" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.859135 4932 generic.go:334] "Generic (PLEG): container finished" podID="69713df1-f374-455e-8cb2-eef32394d3ca" containerID="a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484" exitCode=0 Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.859180 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerDied","Data":"a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484"} Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.859198 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k72qk" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.859214 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k72qk" event={"ID":"69713df1-f374-455e-8cb2-eef32394d3ca","Type":"ContainerDied","Data":"b1a4f175894e42b517ad42a87390a8e429b9be269422bcf2e333b86569f80bcc"} Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.859280 4932 scope.go:117] "RemoveContainer" containerID="a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.886554 4932 scope.go:117] "RemoveContainer" containerID="82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.909567 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k72qk"] Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.916680 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k72qk"] Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.923750 4932 scope.go:117] "RemoveContainer" containerID="5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.978532 4932 scope.go:117] "RemoveContainer" containerID="a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484" Mar 21 10:18:50 crc kubenswrapper[4932]: E0321 10:18:50.979586 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484\": container with ID starting with a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484 not found: ID does not exist" containerID="a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.979631 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484"} err="failed to get container status \"a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484\": rpc error: code = NotFound desc = could not find container \"a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484\": container with ID starting with a794b04ca964abf8773ffa5da630b3378e7cf518893ffc867f5d57780b8eb484 not found: ID does not exist" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.979658 4932 scope.go:117] "RemoveContainer" containerID="82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a" Mar 21 10:18:50 crc kubenswrapper[4932]: E0321 10:18:50.980097 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a\": container with ID starting with 82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a not found: ID does not exist" containerID="82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.980142 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a"} err="failed to get container status \"82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a\": rpc error: code = NotFound desc = could not find container \"82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a\": container with ID starting with 82e39b64f90306af3c412f339c5e7e3990c5215d1e2835e64993501e17ffd17a not found: ID does not exist" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.980171 4932 scope.go:117] "RemoveContainer" containerID="5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47" Mar 21 10:18:50 crc kubenswrapper[4932]: E0321 
10:18:50.980558 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47\": container with ID starting with 5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47 not found: ID does not exist" containerID="5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47" Mar 21 10:18:50 crc kubenswrapper[4932]: I0321 10:18:50.980587 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47"} err="failed to get container status \"5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47\": rpc error: code = NotFound desc = could not find container \"5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47\": container with ID starting with 5e495ce86e85ba779e21b4d659c94b53976340695724e7df2b4478f77c660a47 not found: ID does not exist" Mar 21 10:18:51 crc kubenswrapper[4932]: I0321 10:18:51.723595 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" path="/var/lib/kubelet/pods/69713df1-f374-455e-8cb2-eef32394d3ca/volumes" Mar 21 10:18:55 crc kubenswrapper[4932]: I0321 10:18:55.793254 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54s68" Mar 21 10:18:55 crc kubenswrapper[4932]: I0321 10:18:55.849207 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54s68" Mar 21 10:18:56 crc kubenswrapper[4932]: I0321 10:18:56.031006 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54s68"] Mar 21 10:18:56 crc kubenswrapper[4932]: I0321 10:18:56.913081 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-54s68" 
podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="registry-server" containerID="cri-o://85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef" gracePeriod=2 Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.373644 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54s68" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.428972 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-utilities\") pod \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.429256 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-catalog-content\") pod \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.429303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpg8j\" (UniqueName: \"kubernetes.io/projected/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-kube-api-access-zpg8j\") pod \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\" (UID: \"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4\") " Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.429889 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-utilities" (OuterVolumeSpecName: "utilities") pod "9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" (UID: "9712f8be-ceb5-44ae-aa65-5ab66fbff3e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.447507 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-kube-api-access-zpg8j" (OuterVolumeSpecName: "kube-api-access-zpg8j") pod "9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" (UID: "9712f8be-ceb5-44ae-aa65-5ab66fbff3e4"). InnerVolumeSpecName "kube-api-access-zpg8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.532587 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.532632 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpg8j\" (UniqueName: \"kubernetes.io/projected/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-kube-api-access-zpg8j\") on node \"crc\" DevicePath \"\"" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.583955 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" (UID: "9712f8be-ceb5-44ae-aa65-5ab66fbff3e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.634122 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.923483 4932 generic.go:334] "Generic (PLEG): container finished" podID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerID="85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef" exitCode=0 Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.923553 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerDied","Data":"85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef"} Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.923587 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54s68" event={"ID":"9712f8be-ceb5-44ae-aa65-5ab66fbff3e4","Type":"ContainerDied","Data":"fe4283bd2dee29071894523d84adc8c9c4d6ccaf57617cf4874b2ab701f7a6d3"} Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.923606 4932 scope.go:117] "RemoveContainer" containerID="85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.923761 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54s68" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.927766 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" exitCode=1 Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.927811 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"} Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.928705 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:18:57 crc kubenswrapper[4932]: E0321 10:18:57.928934 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.946848 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54s68"] Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.948550 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.948578 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.948587 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 
10:18:57.948616 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.956057 4932 scope.go:117] "RemoveContainer" containerID="7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4" Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.962566 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-54s68"] Mar 21 10:18:57 crc kubenswrapper[4932]: I0321 10:18:57.976958 4932 scope.go:117] "RemoveContainer" containerID="96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.022449 4932 scope.go:117] "RemoveContainer" containerID="85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef" Mar 21 10:18:58 crc kubenswrapper[4932]: E0321 10:18:58.023281 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef\": container with ID starting with 85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef not found: ID does not exist" containerID="85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.023332 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef"} err="failed to get container status \"85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef\": rpc error: code = NotFound desc = could not find container \"85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef\": container with ID starting with 85ce8d5aca98b71ddcc12a59582c46270be7c70b8b8b811fb1b6af4a4bcca1ef not found: ID does not exist" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.023515 4932 scope.go:117] "RemoveContainer" 
containerID="7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4" Mar 21 10:18:58 crc kubenswrapper[4932]: E0321 10:18:58.023933 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4\": container with ID starting with 7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4 not found: ID does not exist" containerID="7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.023985 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4"} err="failed to get container status \"7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4\": rpc error: code = NotFound desc = could not find container \"7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4\": container with ID starting with 7366cde39309c35dd62c1bf36034329ba85e5f3b6b8ada6b9228b2edeba94ba4 not found: ID does not exist" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.024013 4932 scope.go:117] "RemoveContainer" containerID="96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3" Mar 21 10:18:58 crc kubenswrapper[4932]: E0321 10:18:58.024484 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3\": container with ID starting with 96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3 not found: ID does not exist" containerID="96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.024514 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3"} err="failed to get container status \"96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3\": rpc error: code = NotFound desc = could not find container \"96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3\": container with ID starting with 96cd3709e7463fd342cefe8d7ec6605d296a0ad964bfdd0075c3ea80db094ea3 not found: ID does not exist" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.024534 4932 scope.go:117] "RemoveContainer" containerID="3f29bf157b433aa98ac12f41bc978faafd3a1562dc7675580dcd2f025f841f05" Mar 21 10:18:58 crc kubenswrapper[4932]: I0321 10:18:58.942499 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:18:58 crc kubenswrapper[4932]: E0321 10:18:58.942772 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:18:59 crc kubenswrapper[4932]: I0321 10:18:59.714293 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" path="/var/lib/kubelet/pods/9712f8be-ceb5-44ae-aa65-5ab66fbff3e4/volumes" Mar 21 10:18:59 crc kubenswrapper[4932]: I0321 10:18:59.951135 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:18:59 crc kubenswrapper[4932]: E0321 10:18:59.951495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" 
pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:19:00 crc kubenswrapper[4932]: I0321 10:19:00.225549 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:19:00 crc kubenswrapper[4932]: I0321 10:19:00.225861 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:19:02 crc kubenswrapper[4932]: I0321 10:19:02.702855 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:19:02 crc kubenswrapper[4932]: E0321 10:19:02.703642 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:19:14 crc kubenswrapper[4932]: I0321 10:19:14.703392 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:19:14 crc kubenswrapper[4932]: E0321 10:19:14.706751 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" 
podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:19:15 crc kubenswrapper[4932]: I0321 10:19:15.702393 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:19:15 crc kubenswrapper[4932]: E0321 10:19:15.702885 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:19:23 crc kubenswrapper[4932]: I0321 10:19:23.429061 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cb57d57b8-l7z46_7d9b603e-30ad-4c88-995f-1d931e8fbb60/barbican-api/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.009663 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b4f895846-xgmln_6ac55ca5-9ef6-4157-a91e-49d312d5b2b8/barbican-keystone-listener/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.016254 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cb57d57b8-l7z46_7d9b603e-30ad-4c88-995f-1d931e8fbb60/barbican-api-log/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.052680 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b4f895846-xgmln_6ac55ca5-9ef6-4157-a91e-49d312d5b2b8/barbican-keystone-listener-log/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.217631 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779f655bb5-55qpq_9c85be66-46fe-4830-b918-25743e5a86d2/barbican-worker/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.282374 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-779f655bb5-55qpq_9c85be66-46fe-4830-b918-25743e5a86d2/barbican-worker-log/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.633482 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb6c0c4e-9d96-4c88-9db6-245c190489fa/ceilometer-notification-agent/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.657426 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb6c0c4e-9d96-4c88-9db6-245c190489fa/ceilometer-central-agent/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.744780 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb6c0c4e-9d96-4c88-9db6-245c190489fa/sg-core/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.745061 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb6c0c4e-9d96-4c88-9db6-245c190489fa/proxy-httpd/0.log" Mar 21 10:19:24 crc kubenswrapper[4932]: I0321 10:19:24.999606 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_250ff311-5acb-4def-9d7c-ead2d48f29bd/cinder-api-log/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.038282 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_250ff311-5acb-4def-9d7c-ead2d48f29bd/cinder-api/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.131650 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2cae8d2b-2318-4ec3-a291-10530a2532d5/cinder-scheduler/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.260879 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2cae8d2b-2318-4ec3-a291-10530a2532d5/probe/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.309704 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-754c945467-b2nqr_b458937d-c892-47ea-ac33-6aa0eb244b60/init/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.536705 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-754c945467-b2nqr_b458937d-c892-47ea-ac33-6aa0eb244b60/dnsmasq-dns/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.542483 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-754c945467-b2nqr_b458937d-c892-47ea-ac33-6aa0eb244b60/init/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.547075 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b615cbfa-5bab-4ab9-a5dc-220a68c4331f/glance-httpd/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.752188 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b615cbfa-5bab-4ab9-a5dc-220a68c4331f/glance-log/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.776452 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0a90781-196e-452d-9175-b390d33a495c/glance-httpd/0.log" Mar 21 10:19:25 crc kubenswrapper[4932]: I0321 10:19:25.840636 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0a90781-196e-452d-9175-b390d33a495c/glance-log/0.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.073233 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7998c44c8d-kb65g_a2137f88-2dc2-4718-bd8d-229745974b9a/horizon/16.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.125630 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7998c44c8d-kb65g_a2137f88-2dc2-4718-bd8d-229745974b9a/horizon-log/0.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.185049 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-7998c44c8d-kb65g_a2137f88-2dc2-4718-bd8d-229745974b9a/horizon/16.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.405944 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-fbf6fd964-2w7xj_13285608-51c1-4307-a442-e0cd0e881385/horizon-log/0.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.420667 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-fbf6fd964-2w7xj_13285608-51c1-4307-a442-e0cd0e881385/horizon/16.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.451456 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-fbf6fd964-2w7xj_13285608-51c1-4307-a442-e0cd0e881385/horizon/16.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.671397 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29568121-t7kb5_5f6ea447-7a62-4c99-b3a6-3afd311e976b/keystone-cron/0.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.739289 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-658f888668-v6842_50adb689-8024-4fac-a9d0-8133a18de438/keystone-api/0.log" Mar 21 10:19:26 crc kubenswrapper[4932]: I0321 10:19:26.882491 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c8b0f78b-06df-4bfa-8477-b291b7787e8d/kube-state-metrics/0.log" Mar 21 10:19:27 crc kubenswrapper[4932]: I0321 10:19:27.174719 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cc58bddf-lsnvj_5fd863c3-5f3c-4d25-96db-9a8154ecedcf/neutron-api/0.log" Mar 21 10:19:27 crc kubenswrapper[4932]: I0321 10:19:27.178289 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cc58bddf-lsnvj_5fd863c3-5f3c-4d25-96db-9a8154ecedcf/neutron-httpd/0.log" Mar 21 10:19:27 crc kubenswrapper[4932]: I0321 10:19:27.525841 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_notifications-rabbitmq-server-0_07d3d99e-014e-4924-827a-f3e2f87774c6/setup-container/0.log" Mar 21 10:19:27 crc kubenswrapper[4932]: I0321 10:19:27.743980 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_07d3d99e-014e-4924-827a-f3e2f87774c6/rabbitmq/0.log" Mar 21 10:19:27 crc kubenswrapper[4932]: I0321 10:19:27.776460 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_07d3d99e-014e-4924-827a-f3e2f87774c6/setup-container/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.093366 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ffbaa3bb-7e4d-4a59-8e88-1b0e518846ad/nova-cell0-conductor-conductor/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.158719 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6b875b34-e9fa-4dc6-9550-0939e59ab0c7/nova-api-log/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.353519 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6b875b34-e9fa-4dc6-9550-0939e59ab0c7/nova-api-api/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.403407 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4a8ebe60-2636-4f10-84b9-4f9056ee3323/nova-cell1-conductor-conductor/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.511122 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_489c9eb2-53f2-4e34-828f-4e294caa705e/nova-cell1-novncproxy-novncproxy/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.660720 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5e61e833-626e-406f-9a07-e4cbd2711bad/nova-metadata-log/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.702160 4932 scope.go:117] 
"RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:19:28 crc kubenswrapper[4932]: E0321 10:19:28.702430 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.953965 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba/mysql-bootstrap/0.log" Mar 21 10:19:28 crc kubenswrapper[4932]: I0321 10:19:28.993864 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f4d4ddf9-93f9-46c1-a01f-09bb2d170c34/nova-scheduler-scheduler/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.111194 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5e61e833-626e-406f-9a07-e4cbd2711bad/nova-metadata-metadata/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.225644 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba/galera/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.229620 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bbcaf39d-7201-4a0d-9aa1-289d33b4e2ba/mysql-bootstrap/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.346885 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_616e853e-7b43-435c-b3fd-beaaa89779ff/mysql-bootstrap/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.591496 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_616e853e-7b43-435c-b3fd-beaaa89779ff/mysql-bootstrap/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.603535 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_616e853e-7b43-435c-b3fd-beaaa89779ff/galera/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.674425 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c3e2380f-3fa6-4322-b9e2-befe6a37c754/openstackclient/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.701854 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:19:29 crc kubenswrapper[4932]: E0321 10:19:29.702098 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.817661 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-w54m5_5faa8451-6af5-4eea-ba81-732ddabb83b3/openstack-network-exporter/0.log" Mar 21 10:19:29 crc kubenswrapper[4932]: I0321 10:19:29.895142 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdvp8_9874749a-2839-4a08-bf7a-8e99d3c745a5/ovsdb-server-init/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.082195 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdvp8_9874749a-2839-4a08-bf7a-8e99d3c745a5/ovsdb-server-init/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.107891 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-kdvp8_9874749a-2839-4a08-bf7a-8e99d3c745a5/ovs-vswitchd/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.124318 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kdvp8_9874749a-2839-4a08-bf7a-8e99d3c745a5/ovsdb-server/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.225146 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.225467 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.315376 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vk8zs_86467dc0-186a-407d-b23b-5f1cc14a54ec/ovn-controller/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.380741 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_04747ad4-d988-4d6d-8a2e-ab0e28e2cda0/openstack-network-exporter/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.457851 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_04747ad4-d988-4d6d-8a2e-ab0e28e2cda0/ovn-northd/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.551219 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ff5058f9-6f1d-412e-a4c1-c12b67a26b41/openstack-network-exporter/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 
10:19:30.603143 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ff5058f9-6f1d-412e-a4c1-c12b67a26b41/ovsdbserver-nb/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.776659 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eeca80df-848b-4833-96c7-f4e57ad330f7/openstack-network-exporter/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.799118 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eeca80df-848b-4833-96c7-f4e57ad330f7/ovsdbserver-sb/0.log" Mar 21 10:19:30 crc kubenswrapper[4932]: I0321 10:19:30.991701 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bc4cc655d-wmdr9_8d312ece-9744-4dc2-be9d-1220beb02bb1/placement-api/0.log" Mar 21 10:19:31 crc kubenswrapper[4932]: I0321 10:19:31.153960 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bc4cc655d-wmdr9_8d312ece-9744-4dc2-be9d-1220beb02bb1/placement-log/0.log" Mar 21 10:19:31 crc kubenswrapper[4932]: I0321 10:19:31.268558 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c8728f16-950d-456c-865c-87365e4bc418/init-config-reloader/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.002822 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c8728f16-950d-456c-865c-87365e4bc418/init-config-reloader/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.040084 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c8728f16-950d-456c-865c-87365e4bc418/thanos-sidecar/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.069857 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c8728f16-950d-456c-865c-87365e4bc418/prometheus/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 
10:19:32.079777 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c8728f16-950d-456c-865c-87365e4bc418/config-reloader/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.260227 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52bf7d16-ddac-464e-aca0-7756f5a9f696/setup-container/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.538106 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52bf7d16-ddac-464e-aca0-7756f5a9f696/rabbitmq/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.546531 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52bf7d16-ddac-464e-aca0-7756f5a9f696/setup-container/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.586757 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_debe0e76-6d3a-402f-af21-a3ba7ceb5a24/setup-container/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.835712 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_debe0e76-6d3a-402f-af21-a3ba7ceb5a24/setup-container/0.log" Mar 21 10:19:32 crc kubenswrapper[4932]: I0321 10:19:32.880518 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_debe0e76-6d3a-402f-af21-a3ba7ceb5a24/rabbitmq/0.log" Mar 21 10:19:33 crc kubenswrapper[4932]: I0321 10:19:33.006370 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55b6df9899-kzdvb_13cb95e4-69d8-4acf-9b49-8da6aed86089/proxy-httpd/0.log" Mar 21 10:19:33 crc kubenswrapper[4932]: I0321 10:19:33.715650 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55b6df9899-kzdvb_13cb95e4-69d8-4acf-9b49-8da6aed86089/proxy-server/0.log" Mar 21 10:19:33 crc kubenswrapper[4932]: I0321 10:19:33.733649 4932 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-rxcdd_59f360d4-96d6-4693-a5f8-2473c4d55eca/swift-ring-rebalance/0.log" Mar 21 10:19:33 crc kubenswrapper[4932]: I0321 10:19:33.932998 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/account-reaper/0.log" Mar 21 10:19:33 crc kubenswrapper[4932]: I0321 10:19:33.946482 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/account-replicator/0.log" Mar 21 10:19:33 crc kubenswrapper[4932]: I0321 10:19:33.994998 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/account-auditor/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.015836 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/account-server/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.153844 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/container-auditor/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.185326 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/container-updater/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.226018 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/container-server/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.231944 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/container-replicator/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.361484 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/object-auditor/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.433505 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/object-expirer/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.481882 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/object-replicator/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.483011 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/object-server/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.569181 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/object-updater/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.621564 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/rsync/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.695620 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a350804d-f44d-4a1c-b748-24af07a9e811/swift-recon-cron/0.log" Mar 21 10:19:34 crc kubenswrapper[4932]: I0321 10:19:34.939105 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c75f5feb-4823-4552-a315-ea0e197ba158/watcher-api-log/0.log" Mar 21 10:19:35 crc kubenswrapper[4932]: I0321 10:19:35.154787 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_f1c6cb87-a70a-4f04-802c-2d33d5449350/watcher-applier/0.log" Mar 21 10:19:35 crc kubenswrapper[4932]: I0321 10:19:35.482835 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_ce099b91-a4a0-4e8b-887d-6680f656ad71/watcher-decision-engine/0.log" Mar 21 10:19:37 crc kubenswrapper[4932]: I0321 10:19:37.367666 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_c75f5feb-4823-4552-a315-ea0e197ba158/watcher-api/0.log" Mar 21 10:19:39 crc kubenswrapper[4932]: I0321 10:19:39.702224 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:19:39 crc kubenswrapper[4932]: E0321 10:19:39.702653 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:19:43 crc kubenswrapper[4932]: I0321 10:19:43.201139 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9b1d914e-2d0b-4aa6-a863-04496a5acb61/memcached/0.log" Mar 21 10:19:43 crc kubenswrapper[4932]: I0321 10:19:43.703123 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:19:43 crc kubenswrapper[4932]: E0321 10:19:43.703380 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:19:53 crc kubenswrapper[4932]: I0321 10:19:53.703319 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:19:53 crc kubenswrapper[4932]: E0321 10:19:53.704154 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:19:57 crc kubenswrapper[4932]: I0321 10:19:57.708679 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:19:57 crc kubenswrapper[4932]: E0321 10:19:57.709412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.140519 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568140-555nb"] Mar 21 10:20:00 crc kubenswrapper[4932]: E0321 10:20:00.141312 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="extract-utilities" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141331 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="extract-utilities" Mar 21 10:20:00 crc kubenswrapper[4932]: E0321 10:20:00.141379 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="extract-content" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141389 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="extract-content" Mar 21 10:20:00 crc kubenswrapper[4932]: E0321 10:20:00.141401 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="registry-server" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141411 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="registry-server" Mar 21 10:20:00 crc kubenswrapper[4932]: E0321 10:20:00.141437 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="extract-utilities" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141445 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="extract-utilities" Mar 21 10:20:00 crc kubenswrapper[4932]: E0321 10:20:00.141457 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="extract-content" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141464 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="extract-content" Mar 21 10:20:00 crc kubenswrapper[4932]: E0321 10:20:00.141480 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="registry-server" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141487 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="registry-server" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141712 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9712f8be-ceb5-44ae-aa65-5ab66fbff3e4" containerName="registry-server" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.141730 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="69713df1-f374-455e-8cb2-eef32394d3ca" containerName="registry-server" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.142612 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568140-555nb" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.144485 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.146789 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.149159 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.149319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568140-555nb"] Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.204956 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhmf\" (UniqueName: \"kubernetes.io/projected/fa70f328-7d68-4b42-bef3-f28be74a015c-kube-api-access-szhmf\") pod \"auto-csr-approver-29568140-555nb\" (UID: \"fa70f328-7d68-4b42-bef3-f28be74a015c\") " pod="openshift-infra/auto-csr-approver-29568140-555nb" Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.225861 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.225917 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:20:00 crc 
kubenswrapper[4932]: I0321 10:20:00.225962 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.226711 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fec860383151efbde4fb71b965ba9220d9ab3cfffbe09f8942e6ba8572764e4"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.226781 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://8fec860383151efbde4fb71b965ba9220d9ab3cfffbe09f8942e6ba8572764e4" gracePeriod=600
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.300181 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-4jn2b_0f39b226-69b5-4dcf-b4eb-f92a2fc10261/manager/0.log"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.306386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhmf\" (UniqueName: \"kubernetes.io/projected/fa70f328-7d68-4b42-bef3-f28be74a015c-kube-api-access-szhmf\") pod \"auto-csr-approver-29568140-555nb\" (UID: \"fa70f328-7d68-4b42-bef3-f28be74a015c\") " pod="openshift-infra/auto-csr-approver-29568140-555nb"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.531990 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhmf\" (UniqueName: \"kubernetes.io/projected/fa70f328-7d68-4b42-bef3-f28be74a015c-kube-api-access-szhmf\") pod \"auto-csr-approver-29568140-555nb\" (UID: \"fa70f328-7d68-4b42-bef3-f28be74a015c\") " pod="openshift-infra/auto-csr-approver-29568140-555nb"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.719841 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/util/0.log"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.762874 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568140-555nb"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.891418 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/util/0.log"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.911256 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/pull/0.log"
Mar 21 10:20:00 crc kubenswrapper[4932]: I0321 10:20:00.933230 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/pull/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.065025 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/pull/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.103396 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/util/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.119812 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7f34f0d6f554f586e0b919ac15e84db227eddd567f57c4059b9e3aff3wcx8v_6bc25096-fa48-42e9-9984-d44fbe344949/extract/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: W0321 10:20:01.232755 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa70f328_7d68_4b42_bef3_f28be74a015c.slice/crio-1c9b1c61145bb7f7db90b223e027e202ea43a1bc28b191614cc9e4da07dcb561 WatchSource:0}: Error finding container 1c9b1c61145bb7f7db90b223e027e202ea43a1bc28b191614cc9e4da07dcb561: Status 404 returned error can't find the container with id 1c9b1c61145bb7f7db90b223e027e202ea43a1bc28b191614cc9e4da07dcb561
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.234812 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568140-555nb"]
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.337609 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-n42t7_d6b79abd-6b81-44e4-89dd-3743dd7cc389/manager/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.455975 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="8fec860383151efbde4fb71b965ba9220d9ab3cfffbe09f8942e6ba8572764e4" exitCode=0
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.456037 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"8fec860383151efbde4fb71b965ba9220d9ab3cfffbe09f8942e6ba8572764e4"}
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.456069 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"}
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.456087 4932 scope.go:117] "RemoveContainer" containerID="67bf3e1d1e6fdf3ce56e2b49450b2c9cb3e3a06a4b62db173ecb95e01ca7be90"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.458858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568140-555nb" event={"ID":"fa70f328-7d68-4b42-bef3-f28be74a015c","Type":"ContainerStarted","Data":"1c9b1c61145bb7f7db90b223e027e202ea43a1bc28b191614cc9e4da07dcb561"}
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.643949 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-mktkt_548b3963-aca5-475e-b79d-7d9870d11155/manager/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.702075 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-kbdqg_ac0ffb78-05f2-4c27-a5c3-6020714b1792/manager/0.log"
Mar 21 10:20:01 crc kubenswrapper[4932]: I0321 10:20:01.968699 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-r8qnx_015c7bce-9d22-47ec-90ac-049bbba07d7e/manager/0.log"
Mar 21 10:20:02 crc kubenswrapper[4932]: I0321 10:20:02.539290 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7ffb6b7cdc-zf69x_49d38fa1-5a18-49d6-92e0-47942d410eba/manager/0.log"
Mar 21 10:20:02 crc kubenswrapper[4932]: I0321 10:20:02.785428 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-dt528_8356f354-c7d8-4c5c-9db0-c5e458971c5d/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.044940 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-gjtn7_1e3fd98f-c5ac-4087-9a3d-a3aec1241774/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.073590 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-m25c9_a3b11074-e78d-4f10-890f-d0c9dc1b4d46/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.187835 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-b6g8p_bc611f60-9bae-4e2a-a6f9-7f88221b7464/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.306178 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-b4jmj_ad2f0a32-8261-4292-a063-13b65ab4ffe8/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.423852 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-xshm8_e041327c-c039-44fc-8fa6-c7e606e1bc56/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.479975 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568140-555nb" event={"ID":"fa70f328-7d68-4b42-bef3-f28be74a015c","Type":"ContainerStarted","Data":"8a2c1ec241cd078db91212a59bd84d062968b7d75bfe97b0050c6871d24bd917"}
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.510329 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29568140-555nb" podStartSLOduration=2.392318679 podStartE2EDuration="3.510306668s" podCreationTimestamp="2026-03-21 10:20:00 +0000 UTC" firstStartedPulling="2026-03-21 10:20:01.236515136 +0000 UTC m=+4904.831713405" lastFinishedPulling="2026-03-21 10:20:02.354503125 +0000 UTC m=+4905.949701394" observedRunningTime="2026-03-21 10:20:03.503509234 +0000 UTC m=+4907.098707503" watchObservedRunningTime="2026-03-21 10:20:03.510306668 +0000 UTC m=+4907.105504937"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.551078 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-nlr7d_1e153c9d-de71-4eea-9b33-4713472b3431/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.613554 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-6hl6p_23de4e40-dece-4cf2-a0f2-60fdcd2c7588/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.742843 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-xg5gn_66456873-3ce6-4ccc-bc44-ef45d9c30821/manager/0.log"
Mar 21 10:20:03 crc kubenswrapper[4932]: I0321 10:20:03.877374 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5787cd5f5b-j28mj_5e38ddbb-6b03-4b21-8d53-21852227d6bf/operator/0.log"
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.162792 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-bspz8_99fe35fa-b4ab-4e2d-8e2f-bfcbe7a54e4e/registry-server/0.log"
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.436535 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-5dgnz_9dc780d5-ff2a-4d92-ba79-076f72964907/manager/0.log"
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.490113 4932 generic.go:334] "Generic (PLEG): container finished" podID="fa70f328-7d68-4b42-bef3-f28be74a015c" containerID="8a2c1ec241cd078db91212a59bd84d062968b7d75bfe97b0050c6871d24bd917" exitCode=0
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.490166 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568140-555nb" event={"ID":"fa70f328-7d68-4b42-bef3-f28be74a015c","Type":"ContainerDied","Data":"8a2c1ec241cd078db91212a59bd84d062968b7d75bfe97b0050c6871d24bd917"}
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.575429 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-gh8vr_15c187a9-b91f-4418-a63e-20fd4f52de2f/manager/0.log"
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.664271 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tlpd9_70cf1e28-3bbf-4c35-a013-e1a5bfab962f/operator/0.log"
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.844737 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c7b6d4df4-285rf_598a1e12-b105-41b7-93b5-123bd4f38dd9/manager/0.log"
Mar 21 10:20:04 crc kubenswrapper[4932]: I0321 10:20:04.888979 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-w84fw_137f9007-91ad-4db2-bd97-4fdb99ba1ecb/manager/0.log"
Mar 21 10:20:05 crc kubenswrapper[4932]: I0321 10:20:05.091440 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-2qld2_e5984b5c-66b6-4141-ae47-b0b92f487355/manager/0.log"
Mar 21 10:20:05 crc kubenswrapper[4932]: I0321 10:20:05.121676 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-dbgzf_83f39e87-13b5-4282-8d4f-820f1f80931b/manager/0.log"
Mar 21 10:20:05 crc kubenswrapper[4932]: I0321 10:20:05.225564 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7d7cfb649d-wl9g6_49da73f0-68f0-4d58-954a-f0c7132f3e9f/manager/0.log"
Mar 21 10:20:05 crc kubenswrapper[4932]: I0321 10:20:05.932691 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568140-555nb"
Mar 21 10:20:06 crc kubenswrapper[4932]: I0321 10:20:06.018416 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szhmf\" (UniqueName: \"kubernetes.io/projected/fa70f328-7d68-4b42-bef3-f28be74a015c-kube-api-access-szhmf\") pod \"fa70f328-7d68-4b42-bef3-f28be74a015c\" (UID: \"fa70f328-7d68-4b42-bef3-f28be74a015c\") "
Mar 21 10:20:06 crc kubenswrapper[4932]: I0321 10:20:06.030100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa70f328-7d68-4b42-bef3-f28be74a015c-kube-api-access-szhmf" (OuterVolumeSpecName: "kube-api-access-szhmf") pod "fa70f328-7d68-4b42-bef3-f28be74a015c" (UID: "fa70f328-7d68-4b42-bef3-f28be74a015c"). InnerVolumeSpecName "kube-api-access-szhmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:20:06 crc kubenswrapper[4932]: I0321 10:20:06.120995 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szhmf\" (UniqueName: \"kubernetes.io/projected/fa70f328-7d68-4b42-bef3-f28be74a015c-kube-api-access-szhmf\") on node \"crc\" DevicePath \"\""
Mar 21 10:20:06 crc kubenswrapper[4932]: I0321 10:20:06.511237 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568140-555nb" event={"ID":"fa70f328-7d68-4b42-bef3-f28be74a015c","Type":"ContainerDied","Data":"1c9b1c61145bb7f7db90b223e027e202ea43a1bc28b191614cc9e4da07dcb561"}
Mar 21 10:20:06 crc kubenswrapper[4932]: I0321 10:20:06.511280 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9b1c61145bb7f7db90b223e027e202ea43a1bc28b191614cc9e4da07dcb561"
Mar 21 10:20:06 crc kubenswrapper[4932]: I0321 10:20:06.511330 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568140-555nb"
Mar 21 10:20:07 crc kubenswrapper[4932]: I0321 10:20:07.021035 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568134-nv42z"]
Mar 21 10:20:07 crc kubenswrapper[4932]: I0321 10:20:07.029865 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568134-nv42z"]
Mar 21 10:20:07 crc kubenswrapper[4932]: I0321 10:20:07.714706 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0796a1db-2603-49e5-be3a-2aa0dcc792b5" path="/var/lib/kubelet/pods/0796a1db-2603-49e5-be3a-2aa0dcc792b5/volumes"
Mar 21 10:20:08 crc kubenswrapper[4932]: I0321 10:20:08.703291 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:20:08 crc kubenswrapper[4932]: E0321 10:20:08.703580 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:20:09 crc kubenswrapper[4932]: I0321 10:20:09.703079 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:20:09 crc kubenswrapper[4932]: E0321 10:20:09.703363 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:20:21 crc kubenswrapper[4932]: I0321 10:20:21.702472 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:20:21 crc kubenswrapper[4932]: E0321 10:20:21.703086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:20:23 crc kubenswrapper[4932]: I0321 10:20:23.941245 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-q6jcs_4092f902-ab2d-4d16-a90e-f0e28265ee00/control-plane-machine-set-operator/0.log"
Mar 21 10:20:24 crc kubenswrapper[4932]: I0321 10:20:24.119479 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bn5tw_120be070-2828-4e64-ac15-e20d8eb7a59c/kube-rbac-proxy/0.log"
Mar 21 10:20:24 crc kubenswrapper[4932]: I0321 10:20:24.158806 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bn5tw_120be070-2828-4e64-ac15-e20d8eb7a59c/machine-api-operator/0.log"
Mar 21 10:20:24 crc kubenswrapper[4932]: I0321 10:20:24.702723 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:20:24 crc kubenswrapper[4932]: E0321 10:20:24.702929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:20:34 crc kubenswrapper[4932]: I0321 10:20:34.418876 4932 scope.go:117] "RemoveContainer" containerID="703df61816b6d6e36db52433aa7379d279ddfda7fd0a5fa95c42d1bd51d8a0fb"
Mar 21 10:20:35 crc kubenswrapper[4932]: I0321 10:20:35.703128 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:20:35 crc kubenswrapper[4932]: E0321 10:20:35.703643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:20:36 crc kubenswrapper[4932]: I0321 10:20:36.985121 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-j2plc_cab06b9d-4bb0-40e9-93b7-9448b2d47467/cert-manager-controller/0.log"
Mar 21 10:20:37 crc kubenswrapper[4932]: I0321 10:20:37.165132 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l5m8r_73bfb425-466a-4886-ac74-3fa588f4eb32/cert-manager-cainjector/0.log"
Mar 21 10:20:37 crc kubenswrapper[4932]: I0321 10:20:37.209022 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-fdnwp_53b6ef69-81be-4a78-9f72-c0464ac4b003/cert-manager-webhook/0.log"
Mar 21 10:20:37 crc kubenswrapper[4932]: I0321 10:20:37.710524 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:20:37 crc kubenswrapper[4932]: E0321 10:20:37.710755 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:20:48 crc kubenswrapper[4932]: I0321 10:20:48.702523 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:20:48 crc kubenswrapper[4932]: E0321 10:20:48.703460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:20:49 crc kubenswrapper[4932]: I0321 10:20:49.393281 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-6npgh_e9cb520c-96ae-4782-bd71-060a6de3c212/nmstate-console-plugin/0.log"
Mar 21 10:20:49 crc kubenswrapper[4932]: I0321 10:20:49.566683 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mtqtd_0fa9e21c-b5c2-48ee-9b5c-4c8fe2893775/nmstate-handler/0.log"
Mar 21 10:20:49 crc kubenswrapper[4932]: I0321 10:20:49.614070 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-2j2f2_701f4786-9ccc-4178-a2f7-b88ec63c7a81/kube-rbac-proxy/0.log"
Mar 21 10:20:49 crc kubenswrapper[4932]: I0321 10:20:49.719589 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-2j2f2_701f4786-9ccc-4178-a2f7-b88ec63c7a81/nmstate-metrics/0.log"
Mar 21 10:20:49 crc kubenswrapper[4932]: I0321 10:20:49.780920 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-pw2wf_6cbaeae6-d897-4485-9378-5370ce57234e/nmstate-operator/0.log"
Mar 21 10:20:49 crc kubenswrapper[4932]: I0321 10:20:49.914462 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-vj7tk_d335f065-a5e0-46ca-b99c-61b2b2dbb3ea/nmstate-webhook/0.log"
Mar 21 10:20:50 crc kubenswrapper[4932]: I0321 10:20:50.702707 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:20:50 crc kubenswrapper[4932]: E0321 10:20:50.703679 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:21:01 crc kubenswrapper[4932]: I0321 10:21:01.703662 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:21:01 crc kubenswrapper[4932]: E0321 10:21:01.704205 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:21:02 crc kubenswrapper[4932]: I0321 10:21:02.164296 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-wchrv_50126757-824d-4f6b-8427-1cf4299adc5c/prometheus-operator/0.log"
Mar 21 10:21:02 crc kubenswrapper[4932]: I0321 10:21:02.334778 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-974755686-dcxqd_8e3f6352-4f16-48d4-b2d4-aaf334ba7521/prometheus-operator-admission-webhook/0.log"
Mar 21 10:21:02 crc kubenswrapper[4932]: I0321 10:21:02.369774 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-974755686-rlk58_3553f81f-1424-4c1b-8c98-b05f0e2103c4/prometheus-operator-admission-webhook/0.log"
Mar 21 10:21:02 crc kubenswrapper[4932]: I0321 10:21:02.545171 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-dfktt_e09365ab-596d-4dcf-b3d1-5bba08ab9f43/operator/0.log"
Mar 21 10:21:02 crc kubenswrapper[4932]: I0321 10:21:02.568856 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-7447db6c6c-r2ttb_a654ea14-9551-467e-bdb5-024465a33224/perses-operator/0.log"
Mar 21 10:21:02 crc kubenswrapper[4932]: I0321 10:21:02.702554 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:21:02 crc kubenswrapper[4932]: E0321 10:21:02.702809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.413256 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-s29rd_6e14cede-ac29-4c58-954c-43b97f2d2d0e/kube-rbac-proxy/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.539247 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-s29rd_6e14cede-ac29-4c58-954c-43b97f2d2d0e/controller/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.596798 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-frr-files/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.704214 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:21:16 crc kubenswrapper[4932]: E0321 10:21:16.704708 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.784903 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-frr-files/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.797707 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-reloader/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.809199 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-metrics/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.837213 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-reloader/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.994944 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-metrics/0.log"
Mar 21 10:21:16 crc kubenswrapper[4932]: I0321 10:21:16.995106 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-reloader/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.043320 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-frr-files/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.071326 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-metrics/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.201513 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-reloader/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.236635 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-metrics/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.237014 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/cp-frr-files/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.258234 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/controller/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.410168 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/frr-metrics/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.442696 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/kube-rbac-proxy/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.469695 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/kube-rbac-proxy-frr/0.log"
Mar 21 10:21:17 crc kubenswrapper[4932]: I0321 10:21:17.710236 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:21:17 crc kubenswrapper[4932]: E0321 10:21:17.710799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:21:18 crc kubenswrapper[4932]: I0321 10:21:18.215694 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/reloader/0.log"
Mar 21 10:21:18 crc kubenswrapper[4932]: I0321 10:21:18.285132 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-2rjd6_36c9b068-cfad-4fb8-80e6-04eec1b1a4a6/frr-k8s-webhook-server/0.log"
Mar 21 10:21:18 crc kubenswrapper[4932]: I0321 10:21:18.507390 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-89c599fbb-6tg96_7a2f426b-c3bb-4248-8a49-fba11b225c08/manager/0.log"
Mar 21 10:21:18 crc kubenswrapper[4932]: I0321 10:21:18.679171 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58bf4cc6c7-jtsrl_d361ba9f-cc61-4e3c-a206-0ac2bc5ac090/webhook-server/0.log"
Mar 21 10:21:18 crc kubenswrapper[4932]: I0321 10:21:18.786763 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jzgr4_d489024b-f8ca-4976-aa77-a2809312901d/kube-rbac-proxy/0.log"
Mar 21 10:21:19 crc kubenswrapper[4932]: I0321 10:21:19.125733 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bkzvk_6cfdb03a-bd2e-40bf-b05a-ed5e5153c5f0/frr/0.log"
Mar 21 10:21:19 crc kubenswrapper[4932]: I0321 10:21:19.342269 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jzgr4_d489024b-f8ca-4976-aa77-a2809312901d/speaker/0.log"
Mar 21 10:21:29 crc kubenswrapper[4932]: I0321 10:21:29.703384 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:21:29 crc kubenswrapper[4932]: E0321 10:21:29.704113 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:21:30 crc kubenswrapper[4932]: I0321 10:21:30.703090 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:21:30 crc kubenswrapper[4932]: E0321 10:21:30.703541 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.067054 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/util/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.266329 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/pull/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.276699 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/util/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.344528 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/pull/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.494423 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/util/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.495906 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/extract/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.549700 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8749czjj_f7fbd2ff-b5dc-4a28-8651-841f850a8099/pull/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.711756 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/util/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.834404 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/util/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.892534 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/pull/0.log"
Mar 21 10:21:31 crc kubenswrapper[4932]: I0321 10:21:31.933257 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/pull/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.132896 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/extract/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.167004 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/pull/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.192783 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1lnttx_b38770bf-4b6f-49e8-9295-3bbe5014817e/util/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.487771 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/util/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.702738 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/pull/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.710910 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/util/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.734918 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/pull/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.931377 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/util/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.936696 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/pull/0.log"
Mar 21 10:21:32 crc kubenswrapper[4932]: I0321 10:21:32.951767 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726ftjgc_02cf18c7-ac8d-4afb-9594-ea0675338c9a/extract/0.log"
Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.144424 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/extract-utilities/0.log"
Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.260658 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/extract-utilities/0.log"
Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.300888 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/extract-content/0.log"
Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.350189 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/extract-content/0.log"
Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.525691 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/extract-utilities/0.log"
Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.582756 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/extract-content/0.log" Mar 21 10:21:33 crc kubenswrapper[4932]: I0321 10:21:33.803007 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/extract-utilities/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.080824 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/extract-utilities/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.140712 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/extract-content/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.164927 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/extract-content/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.221983 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflgp_3012669a-397a-40bb-84a1-d53f4d3bb944/registry-server/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.320427 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/extract-content/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.359426 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/extract-utilities/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.591449 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-x2nnj_a43728c7-245f-4a2e-8182-613692389bac/marketplace-operator/0.log" Mar 21 10:21:34 crc kubenswrapper[4932]: I0321 10:21:34.882406 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/extract-utilities/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.122640 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/extract-utilities/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.160620 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/extract-content/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.161566 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/extract-content/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.225043 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h44lz_184f08a1-0394-4048-aadd-4bce2dfbd1e5/registry-server/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.359922 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/extract-utilities/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.481779 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/extract-content/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.502615 4932 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-gqjtm_09f9605a-9fcc-4483-870a-e5075598662e/registry-server/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.594938 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/extract-utilities/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.771432 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/extract-utilities/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.780222 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/extract-content/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.793922 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/extract-content/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.967901 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/extract-content/0.log" Mar 21 10:21:35 crc kubenswrapper[4932]: I0321 10:21:35.972631 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/extract-utilities/0.log" Mar 21 10:21:36 crc kubenswrapper[4932]: I0321 10:21:36.494751 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7s5fk_3f2bb9ef-e246-4b13-a8a4-a9d2135fb743/registry-server/0.log" Mar 21 10:21:43 crc kubenswrapper[4932]: I0321 10:21:43.702552 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:21:43 crc kubenswrapper[4932]: I0321 10:21:43.703259 
4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:21:43 crc kubenswrapper[4932]: E0321 10:21:43.703387 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:21:43 crc kubenswrapper[4932]: E0321 10:21:43.703540 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:21:47 crc kubenswrapper[4932]: I0321 10:21:47.887052 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-974755686-dcxqd_8e3f6352-4f16-48d4-b2d4-aaf334ba7521/prometheus-operator-admission-webhook/0.log" Mar 21 10:21:47 crc kubenswrapper[4932]: I0321 10:21:47.908859 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-974755686-rlk58_3553f81f-1424-4c1b-8c98-b05f0e2103c4/prometheus-operator-admission-webhook/0.log" Mar 21 10:21:47 crc kubenswrapper[4932]: I0321 10:21:47.966168 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-wchrv_50126757-824d-4f6b-8427-1cf4299adc5c/prometheus-operator/0.log" Mar 21 10:21:48 crc kubenswrapper[4932]: I0321 10:21:48.117360 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-dfktt_e09365ab-596d-4dcf-b3d1-5bba08ab9f43/operator/0.log" 
Mar 21 10:21:48 crc kubenswrapper[4932]: I0321 10:21:48.152242 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-7447db6c6c-r2ttb_a654ea14-9551-467e-bdb5-024465a33224/perses-operator/0.log" Mar 21 10:21:54 crc kubenswrapper[4932]: I0321 10:21:54.702235 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:21:54 crc kubenswrapper[4932]: E0321 10:21:54.703014 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:21:58 crc kubenswrapper[4932]: I0321 10:21:58.716762 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:21:58 crc kubenswrapper[4932]: E0321 10:21:58.718228 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.329873 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568142-v92bl"] Mar 21 10:22:00 crc kubenswrapper[4932]: E0321 10:22:00.330644 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa70f328-7d68-4b42-bef3-f28be74a015c" containerName="oc" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.330658 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa70f328-7d68-4b42-bef3-f28be74a015c" containerName="oc" Mar 21 10:22:00 crc 
kubenswrapper[4932]: I0321 10:22:00.330882 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa70f328-7d68-4b42-bef3-f28be74a015c" containerName="oc" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.331636 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.334605 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.335054 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.338253 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.358896 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568142-v92bl"] Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.394639 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68p96\" (UniqueName: \"kubernetes.io/projected/6e3a82cd-8c49-4998-98fe-3d6511c83ce5-kube-api-access-68p96\") pod \"auto-csr-approver-29568142-v92bl\" (UID: \"6e3a82cd-8c49-4998-98fe-3d6511c83ce5\") " pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.496444 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68p96\" (UniqueName: \"kubernetes.io/projected/6e3a82cd-8c49-4998-98fe-3d6511c83ce5-kube-api-access-68p96\") pod \"auto-csr-approver-29568142-v92bl\" (UID: \"6e3a82cd-8c49-4998-98fe-3d6511c83ce5\") " pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.516863 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68p96\" (UniqueName: \"kubernetes.io/projected/6e3a82cd-8c49-4998-98fe-3d6511c83ce5-kube-api-access-68p96\") pod \"auto-csr-approver-29568142-v92bl\" (UID: \"6e3a82cd-8c49-4998-98fe-3d6511c83ce5\") " pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:00 crc kubenswrapper[4932]: I0321 10:22:00.690067 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:01 crc kubenswrapper[4932]: I0321 10:22:01.376552 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568142-v92bl"] Mar 21 10:22:02 crc kubenswrapper[4932]: I0321 10:22:02.344684 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568142-v92bl" event={"ID":"6e3a82cd-8c49-4998-98fe-3d6511c83ce5","Type":"ContainerStarted","Data":"3e472ad9aaa297c1169d4d31ce95b22dc08c38986c144ceb5cbe826447139a76"} Mar 21 10:22:04 crc kubenswrapper[4932]: I0321 10:22:04.363642 4932 generic.go:334] "Generic (PLEG): container finished" podID="6e3a82cd-8c49-4998-98fe-3d6511c83ce5" containerID="41b50031c1f5723bca263865ed647994fc81623a532c85364cdd8a20f7c10249" exitCode=0 Mar 21 10:22:04 crc kubenswrapper[4932]: I0321 10:22:04.363741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568142-v92bl" event={"ID":"6e3a82cd-8c49-4998-98fe-3d6511c83ce5","Type":"ContainerDied","Data":"41b50031c1f5723bca263865ed647994fc81623a532c85364cdd8a20f7c10249"} Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.188462 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.262560 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68p96\" (UniqueName: \"kubernetes.io/projected/6e3a82cd-8c49-4998-98fe-3d6511c83ce5-kube-api-access-68p96\") pod \"6e3a82cd-8c49-4998-98fe-3d6511c83ce5\" (UID: \"6e3a82cd-8c49-4998-98fe-3d6511c83ce5\") " Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.283124 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3a82cd-8c49-4998-98fe-3d6511c83ce5-kube-api-access-68p96" (OuterVolumeSpecName: "kube-api-access-68p96") pod "6e3a82cd-8c49-4998-98fe-3d6511c83ce5" (UID: "6e3a82cd-8c49-4998-98fe-3d6511c83ce5"). InnerVolumeSpecName "kube-api-access-68p96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.364466 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68p96\" (UniqueName: \"kubernetes.io/projected/6e3a82cd-8c49-4998-98fe-3d6511c83ce5-kube-api-access-68p96\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.382564 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568142-v92bl" event={"ID":"6e3a82cd-8c49-4998-98fe-3d6511c83ce5","Type":"ContainerDied","Data":"3e472ad9aaa297c1169d4d31ce95b22dc08c38986c144ceb5cbe826447139a76"} Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.382605 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e472ad9aaa297c1169d4d31ce95b22dc08c38986c144ceb5cbe826447139a76" Mar 21 10:22:06 crc kubenswrapper[4932]: I0321 10:22:06.382610 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568142-v92bl" Mar 21 10:22:07 crc kubenswrapper[4932]: I0321 10:22:07.279325 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568136-bk5bf"] Mar 21 10:22:07 crc kubenswrapper[4932]: I0321 10:22:07.287035 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568136-bk5bf"] Mar 21 10:22:07 crc kubenswrapper[4932]: I0321 10:22:07.715743 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e75f1d8-dcbe-4800-8400-02267ec183d3" path="/var/lib/kubelet/pods/6e75f1d8-dcbe-4800-8400-02267ec183d3/volumes" Mar 21 10:22:08 crc kubenswrapper[4932]: I0321 10:22:08.702522 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:22:08 crc kubenswrapper[4932]: E0321 10:22:08.703575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:22:10 crc kubenswrapper[4932]: E0321 10:22:10.284505 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.20:57490->38.102.83.20:39993: write tcp 38.102.83.20:57490->38.102.83.20:39993: write: broken pipe Mar 21 10:22:11 crc kubenswrapper[4932]: I0321 10:22:11.702660 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:22:11 crc kubenswrapper[4932]: E0321 10:22:11.703481 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.172293 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5nmt"] Mar 21 10:22:19 crc kubenswrapper[4932]: E0321 10:22:19.173150 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3a82cd-8c49-4998-98fe-3d6511c83ce5" containerName="oc" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.173162 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3a82cd-8c49-4998-98fe-3d6511c83ce5" containerName="oc" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.173367 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3a82cd-8c49-4998-98fe-3d6511c83ce5" containerName="oc" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.174785 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.185062 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5nmt"] Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.355726 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-catalog-content\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.356033 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-utilities\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.356190 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxnn\" (UniqueName: \"kubernetes.io/projected/b163dd17-31fd-4220-b2f6-a40933f692e0-kube-api-access-rbxnn\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.457555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxnn\" (UniqueName: \"kubernetes.io/projected/b163dd17-31fd-4220-b2f6-a40933f692e0-kube-api-access-rbxnn\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.457633 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-catalog-content\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.457666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-utilities\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.458268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-utilities\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.458837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-catalog-content\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.478053 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxnn\" (UniqueName: \"kubernetes.io/projected/b163dd17-31fd-4220-b2f6-a40933f692e0-kube-api-access-rbxnn\") pod \"redhat-marketplace-w5nmt\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.494485 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:19 crc kubenswrapper[4932]: W0321 10:22:19.940735 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb163dd17_31fd_4220_b2f6_a40933f692e0.slice/crio-616362dd73f02aebbcb91f8948758bef20b0a959e883f21ce67936b21796e409 WatchSource:0}: Error finding container 616362dd73f02aebbcb91f8948758bef20b0a959e883f21ce67936b21796e409: Status 404 returned error can't find the container with id 616362dd73f02aebbcb91f8948758bef20b0a959e883f21ce67936b21796e409 Mar 21 10:22:19 crc kubenswrapper[4932]: I0321 10:22:19.942049 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5nmt"] Mar 21 10:22:20 crc kubenswrapper[4932]: I0321 10:22:20.506696 4932 generic.go:334] "Generic (PLEG): container finished" podID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerID="fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22" exitCode=0 Mar 21 10:22:20 crc kubenswrapper[4932]: I0321 10:22:20.506869 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerDied","Data":"fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22"} Mar 21 10:22:20 crc kubenswrapper[4932]: I0321 10:22:20.507148 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerStarted","Data":"616362dd73f02aebbcb91f8948758bef20b0a959e883f21ce67936b21796e409"} Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.517424 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" 
event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerStarted","Data":"3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071"} Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.566655 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qq47m"] Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.568782 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.588443 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qq47m"] Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.707084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpzn\" (UniqueName: \"kubernetes.io/projected/76922123-2551-4a45-804e-0465640f9fc3-kube-api-access-8zpzn\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.707140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-utilities\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.707187 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-catalog-content\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.809308 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zpzn\" (UniqueName: \"kubernetes.io/projected/76922123-2551-4a45-804e-0465640f9fc3-kube-api-access-8zpzn\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.809400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-utilities\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.809449 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-catalog-content\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.810912 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-utilities\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.811087 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-catalog-content\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.837725 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8zpzn\" (UniqueName: \"kubernetes.io/projected/76922123-2551-4a45-804e-0465640f9fc3-kube-api-access-8zpzn\") pod \"certified-operators-qq47m\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:21 crc kubenswrapper[4932]: I0321 10:22:21.908743 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:22 crc kubenswrapper[4932]: I0321 10:22:22.454702 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qq47m"] Mar 21 10:22:22 crc kubenswrapper[4932]: W0321 10:22:22.836308 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76922123_2551_4a45_804e_0465640f9fc3.slice/crio-9ecef1850abff45b492f2e21d307ce0622f6b8cc8158e91686ffe197e68d63fb WatchSource:0}: Error finding container 9ecef1850abff45b492f2e21d307ce0622f6b8cc8158e91686ffe197e68d63fb: Status 404 returned error can't find the container with id 9ecef1850abff45b492f2e21d307ce0622f6b8cc8158e91686ffe197e68d63fb Mar 21 10:22:23 crc kubenswrapper[4932]: I0321 10:22:23.535005 4932 generic.go:334] "Generic (PLEG): container finished" podID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerID="3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071" exitCode=0 Mar 21 10:22:23 crc kubenswrapper[4932]: I0321 10:22:23.535091 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerDied","Data":"3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071"} Mar 21 10:22:23 crc kubenswrapper[4932]: I0321 10:22:23.537879 4932 generic.go:334] "Generic (PLEG): container finished" podID="76922123-2551-4a45-804e-0465640f9fc3" 
containerID="f4996536a381cc59bb346de7815688afb60f23fde2b2e72c5baed584d91f4c4c" exitCode=0 Mar 21 10:22:23 crc kubenswrapper[4932]: I0321 10:22:23.538015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerDied","Data":"f4996536a381cc59bb346de7815688afb60f23fde2b2e72c5baed584d91f4c4c"} Mar 21 10:22:23 crc kubenswrapper[4932]: I0321 10:22:23.538800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerStarted","Data":"9ecef1850abff45b492f2e21d307ce0622f6b8cc8158e91686ffe197e68d63fb"} Mar 21 10:22:23 crc kubenswrapper[4932]: I0321 10:22:23.708673 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:22:23 crc kubenswrapper[4932]: E0321 10:22:23.709017 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:22:24 crc kubenswrapper[4932]: I0321 10:22:24.548149 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerStarted","Data":"b04985366092888088fd0a562d84c0d391944943d61caf3d827565134f59ae5f"} Mar 21 10:22:24 crc kubenswrapper[4932]: I0321 10:22:24.552566 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerStarted","Data":"202a640999032f808658f85767b345202c649585207a927d734965a386f705f4"} Mar 21 10:22:24 crc 
kubenswrapper[4932]: I0321 10:22:24.605159 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5nmt" podStartSLOduration=2.002030413 podStartE2EDuration="5.605131953s" podCreationTimestamp="2026-03-21 10:22:19 +0000 UTC" firstStartedPulling="2026-03-21 10:22:20.510369646 +0000 UTC m=+5044.105567915" lastFinishedPulling="2026-03-21 10:22:24.113471186 +0000 UTC m=+5047.708669455" observedRunningTime="2026-03-21 10:22:24.59187595 +0000 UTC m=+5048.187074239" watchObservedRunningTime="2026-03-21 10:22:24.605131953 +0000 UTC m=+5048.200330232" Mar 21 10:22:25 crc kubenswrapper[4932]: I0321 10:22:25.562678 4932 generic.go:334] "Generic (PLEG): container finished" podID="76922123-2551-4a45-804e-0465640f9fc3" containerID="b04985366092888088fd0a562d84c0d391944943d61caf3d827565134f59ae5f" exitCode=0 Mar 21 10:22:25 crc kubenswrapper[4932]: I0321 10:22:25.562759 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerDied","Data":"b04985366092888088fd0a562d84c0d391944943d61caf3d827565134f59ae5f"} Mar 21 10:22:26 crc kubenswrapper[4932]: I0321 10:22:26.573635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerStarted","Data":"52f29a099d319dcfda3b79b442f5df5c1b3cf775b227cf0e90ad73a9a8152b14"} Mar 21 10:22:26 crc kubenswrapper[4932]: I0321 10:22:26.595658 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qq47m" podStartSLOduration=3.178068183 podStartE2EDuration="5.595642521s" podCreationTimestamp="2026-03-21 10:22:21 +0000 UTC" firstStartedPulling="2026-03-21 10:22:23.54059051 +0000 UTC m=+5047.135788779" lastFinishedPulling="2026-03-21 10:22:25.958164848 +0000 UTC m=+5049.553363117" 
observedRunningTime="2026-03-21 10:22:26.594363901 +0000 UTC m=+5050.189562190" watchObservedRunningTime="2026-03-21 10:22:26.595642521 +0000 UTC m=+5050.190840790" Mar 21 10:22:26 crc kubenswrapper[4932]: I0321 10:22:26.702724 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:22:26 crc kubenswrapper[4932]: E0321 10:22:26.702935 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:22:29 crc kubenswrapper[4932]: I0321 10:22:29.495462 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:29 crc kubenswrapper[4932]: I0321 10:22:29.496990 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:29 crc kubenswrapper[4932]: I0321 10:22:29.542860 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:29 crc kubenswrapper[4932]: I0321 10:22:29.647832 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:29 crc kubenswrapper[4932]: I0321 10:22:29.943629 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5nmt"] Mar 21 10:22:30 crc kubenswrapper[4932]: I0321 10:22:30.226156 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Mar 21 10:22:30 crc kubenswrapper[4932]: I0321 10:22:30.226239 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 10:22:31 crc kubenswrapper[4932]: I0321 10:22:31.623340 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5nmt" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="registry-server" containerID="cri-o://202a640999032f808658f85767b345202c649585207a927d734965a386f705f4" gracePeriod=2 Mar 21 10:22:31 crc kubenswrapper[4932]: I0321 10:22:31.909010 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:31 crc kubenswrapper[4932]: I0321 10:22:31.909069 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.057523 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.260884 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.427791 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-catalog-content\") pod \"b163dd17-31fd-4220-b2f6-a40933f692e0\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.427952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-utilities\") pod \"b163dd17-31fd-4220-b2f6-a40933f692e0\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.428130 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbxnn\" (UniqueName: \"kubernetes.io/projected/b163dd17-31fd-4220-b2f6-a40933f692e0-kube-api-access-rbxnn\") pod \"b163dd17-31fd-4220-b2f6-a40933f692e0\" (UID: \"b163dd17-31fd-4220-b2f6-a40933f692e0\") " Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.430022 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-utilities" (OuterVolumeSpecName: "utilities") pod "b163dd17-31fd-4220-b2f6-a40933f692e0" (UID: "b163dd17-31fd-4220-b2f6-a40933f692e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.441566 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b163dd17-31fd-4220-b2f6-a40933f692e0-kube-api-access-rbxnn" (OuterVolumeSpecName: "kube-api-access-rbxnn") pod "b163dd17-31fd-4220-b2f6-a40933f692e0" (UID: "b163dd17-31fd-4220-b2f6-a40933f692e0"). InnerVolumeSpecName "kube-api-access-rbxnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.458012 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b163dd17-31fd-4220-b2f6-a40933f692e0" (UID: "b163dd17-31fd-4220-b2f6-a40933f692e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.534149 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.534196 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbxnn\" (UniqueName: \"kubernetes.io/projected/b163dd17-31fd-4220-b2f6-a40933f692e0-kube-api-access-rbxnn\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.534211 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b163dd17-31fd-4220-b2f6-a40933f692e0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.648744 4932 generic.go:334] "Generic (PLEG): container finished" podID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerID="202a640999032f808658f85767b345202c649585207a927d734965a386f705f4" exitCode=0 Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.649109 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5nmt" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.649436 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerDied","Data":"202a640999032f808658f85767b345202c649585207a927d734965a386f705f4"} Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.649521 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5nmt" event={"ID":"b163dd17-31fd-4220-b2f6-a40933f692e0","Type":"ContainerDied","Data":"616362dd73f02aebbcb91f8948758bef20b0a959e883f21ce67936b21796e409"} Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.649543 4932 scope.go:117] "RemoveContainer" containerID="202a640999032f808658f85767b345202c649585207a927d734965a386f705f4" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.673950 4932 scope.go:117] "RemoveContainer" containerID="3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.709320 4932 scope.go:117] "RemoveContainer" containerID="fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.710056 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5nmt"] Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.727893 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5nmt"] Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.730062 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.757262 4932 scope.go:117] "RemoveContainer" containerID="202a640999032f808658f85767b345202c649585207a927d734965a386f705f4" Mar 21 10:22:32 crc 
kubenswrapper[4932]: E0321 10:22:32.758100 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202a640999032f808658f85767b345202c649585207a927d734965a386f705f4\": container with ID starting with 202a640999032f808658f85767b345202c649585207a927d734965a386f705f4 not found: ID does not exist" containerID="202a640999032f808658f85767b345202c649585207a927d734965a386f705f4" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.758147 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202a640999032f808658f85767b345202c649585207a927d734965a386f705f4"} err="failed to get container status \"202a640999032f808658f85767b345202c649585207a927d734965a386f705f4\": rpc error: code = NotFound desc = could not find container \"202a640999032f808658f85767b345202c649585207a927d734965a386f705f4\": container with ID starting with 202a640999032f808658f85767b345202c649585207a927d734965a386f705f4 not found: ID does not exist" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.758168 4932 scope.go:117] "RemoveContainer" containerID="3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071" Mar 21 10:22:32 crc kubenswrapper[4932]: E0321 10:22:32.758507 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071\": container with ID starting with 3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071 not found: ID does not exist" containerID="3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.758551 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071"} err="failed to get container status 
\"3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071\": rpc error: code = NotFound desc = could not find container \"3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071\": container with ID starting with 3b3c92f5e8e3d38789df0d2a2f9d974cdfd3edaf15a5a4abe9483e152a9a6071 not found: ID does not exist" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.758583 4932 scope.go:117] "RemoveContainer" containerID="fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22" Mar 21 10:22:32 crc kubenswrapper[4932]: E0321 10:22:32.758872 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22\": container with ID starting with fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22 not found: ID does not exist" containerID="fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22" Mar 21 10:22:32 crc kubenswrapper[4932]: I0321 10:22:32.758961 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22"} err="failed to get container status \"fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22\": rpc error: code = NotFound desc = could not find container \"fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22\": container with ID starting with fb2577db174c36e8cd71328336994c8b0c8f6b4701a2afed2507d788e55c9f22 not found: ID does not exist" Mar 21 10:22:33 crc kubenswrapper[4932]: I0321 10:22:33.719594 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" path="/var/lib/kubelet/pods/b163dd17-31fd-4220-b2f6-a40933f692e0/volumes" Mar 21 10:22:34 crc kubenswrapper[4932]: I0321 10:22:34.522173 4932 scope.go:117] "RemoveContainer" containerID="7ea534fc43e058478d7d2c0092c395c3bba32e6ce3001790806da9cec6a7128d" Mar 21 
10:22:35 crc kubenswrapper[4932]: I0321 10:22:35.543758 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qq47m"] Mar 21 10:22:35 crc kubenswrapper[4932]: I0321 10:22:35.544262 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qq47m" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="registry-server" containerID="cri-o://52f29a099d319dcfda3b79b442f5df5c1b3cf775b227cf0e90ad73a9a8152b14" gracePeriod=2 Mar 21 10:22:35 crc kubenswrapper[4932]: I0321 10:22:35.677538 4932 generic.go:334] "Generic (PLEG): container finished" podID="76922123-2551-4a45-804e-0465640f9fc3" containerID="52f29a099d319dcfda3b79b442f5df5c1b3cf775b227cf0e90ad73a9a8152b14" exitCode=0 Mar 21 10:22:35 crc kubenswrapper[4932]: I0321 10:22:35.677577 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerDied","Data":"52f29a099d319dcfda3b79b442f5df5c1b3cf775b227cf0e90ad73a9a8152b14"} Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.043522 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.207814 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-utilities\") pod \"76922123-2551-4a45-804e-0465640f9fc3\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.207872 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-catalog-content\") pod \"76922123-2551-4a45-804e-0465640f9fc3\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.208197 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zpzn\" (UniqueName: \"kubernetes.io/projected/76922123-2551-4a45-804e-0465640f9fc3-kube-api-access-8zpzn\") pod \"76922123-2551-4a45-804e-0465640f9fc3\" (UID: \"76922123-2551-4a45-804e-0465640f9fc3\") " Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.208970 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-utilities" (OuterVolumeSpecName: "utilities") pod "76922123-2551-4a45-804e-0465640f9fc3" (UID: "76922123-2551-4a45-804e-0465640f9fc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.227391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76922123-2551-4a45-804e-0465640f9fc3-kube-api-access-8zpzn" (OuterVolumeSpecName: "kube-api-access-8zpzn") pod "76922123-2551-4a45-804e-0465640f9fc3" (UID: "76922123-2551-4a45-804e-0465640f9fc3"). InnerVolumeSpecName "kube-api-access-8zpzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.260135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76922123-2551-4a45-804e-0465640f9fc3" (UID: "76922123-2551-4a45-804e-0465640f9fc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.310868 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zpzn\" (UniqueName: \"kubernetes.io/projected/76922123-2551-4a45-804e-0465640f9fc3-kube-api-access-8zpzn\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.310903 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.310915 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76922123-2551-4a45-804e-0465640f9fc3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.688284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qq47m" event={"ID":"76922123-2551-4a45-804e-0465640f9fc3","Type":"ContainerDied","Data":"9ecef1850abff45b492f2e21d307ce0622f6b8cc8158e91686ffe197e68d63fb"} Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.688598 4932 scope.go:117] "RemoveContainer" containerID="52f29a099d319dcfda3b79b442f5df5c1b3cf775b227cf0e90ad73a9a8152b14" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.688380 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qq47m" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.703358 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:22:36 crc kubenswrapper[4932]: E0321 10:22:36.703889 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.704246 4932 scope.go:117] "RemoveContainer" containerID="b04985366092888088fd0a562d84c0d391944943d61caf3d827565134f59ae5f" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.734236 4932 scope.go:117] "RemoveContainer" containerID="f4996536a381cc59bb346de7815688afb60f23fde2b2e72c5baed584d91f4c4c" Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.734890 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qq47m"] Mar 21 10:22:36 crc kubenswrapper[4932]: I0321 10:22:36.743740 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qq47m"] Mar 21 10:22:37 crc kubenswrapper[4932]: I0321 10:22:37.713851 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76922123-2551-4a45-804e-0465640f9fc3" path="/var/lib/kubelet/pods/76922123-2551-4a45-804e-0465640f9fc3/volumes" Mar 21 10:22:41 crc kubenswrapper[4932]: I0321 10:22:41.703810 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:22:41 crc kubenswrapper[4932]: E0321 10:22:41.705131 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:22:47 crc kubenswrapper[4932]: I0321 10:22:47.703372 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66" Mar 21 10:22:47 crc kubenswrapper[4932]: E0321 10:22:47.704088 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:22:54 crc kubenswrapper[4932]: I0321 10:22:54.703034 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc" Mar 21 10:22:54 crc kubenswrapper[4932]: E0321 10:22:54.703908 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:23:00 crc kubenswrapper[4932]: I0321 10:23:00.225673 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 10:23:00 crc kubenswrapper[4932]: I0321 10:23:00.225999 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 10:23:02 crc kubenswrapper[4932]: I0321 10:23:02.703249 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:23:02 crc kubenswrapper[4932]: E0321 10:23:02.703590 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:23:06 crc kubenswrapper[4932]: I0321 10:23:06.703014 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:23:06 crc kubenswrapper[4932]: E0321 10:23:06.705071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:23:16 crc kubenswrapper[4932]: I0321 10:23:16.703032 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:23:16 crc kubenswrapper[4932]: E0321 10:23:16.703724 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:23:20 crc kubenswrapper[4932]: I0321 10:23:20.702383 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:23:20 crc kubenswrapper[4932]: E0321 10:23:20.703387 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:23:27 crc kubenswrapper[4932]: I0321 10:23:27.709525 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:23:27 crc kubenswrapper[4932]: E0321 10:23:27.710301 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:23:28 crc kubenswrapper[4932]: I0321 10:23:28.169434 4932 generic.go:334] "Generic (PLEG): container finished" podID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerID="32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d" exitCode=0
Mar 21 10:23:28 crc kubenswrapper[4932]: I0321 10:23:28.169488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" event={"ID":"0f2bc9c7-c804-4108-a92e-f35274d0da17","Type":"ContainerDied","Data":"32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"}
Mar 21 10:23:28 crc kubenswrapper[4932]: I0321 10:23:28.170608 4932 scope.go:117] "RemoveContainer" containerID="32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"
Mar 21 10:23:28 crc kubenswrapper[4932]: I0321 10:23:28.540328 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mpdqp_must-gather-5t2qn_0f2bc9c7-c804-4108-a92e-f35274d0da17/gather/0.log"
Mar 21 10:23:30 crc kubenswrapper[4932]: I0321 10:23:30.225292 4932 patch_prober.go:28] interesting pod/machine-config-daemon-m4n7b container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 10:23:30 crc kubenswrapper[4932]: I0321 10:23:30.225825 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 10:23:30 crc kubenswrapper[4932]: I0321 10:23:30.225862 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b"
Mar 21 10:23:30 crc kubenswrapper[4932]: I0321 10:23:30.226574 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"} pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 21 10:23:30 crc kubenswrapper[4932]: I0321 10:23:30.226625 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" containerName="machine-config-daemon" containerID="cri-o://f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" gracePeriod=600
Mar 21 10:23:30 crc kubenswrapper[4932]: E0321 10:23:30.556648 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:23:31 crc kubenswrapper[4932]: I0321 10:23:31.197915 4932 generic.go:334] "Generic (PLEG): container finished" podID="8044dc63-0327-41d4-93fe-af2287271a84" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" exitCode=0
Mar 21 10:23:31 crc kubenswrapper[4932]: I0321 10:23:31.197991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerDied","Data":"f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"}
Mar 21 10:23:31 crc kubenswrapper[4932]: I0321 10:23:31.198272 4932 scope.go:117] "RemoveContainer" containerID="8fec860383151efbde4fb71b965ba9220d9ab3cfffbe09f8942e6ba8572764e4"
Mar 21 10:23:31 crc kubenswrapper[4932]: I0321 10:23:31.199012 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:23:31 crc kubenswrapper[4932]: E0321 10:23:31.199388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:23:32 crc kubenswrapper[4932]: I0321 10:23:32.702558 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:23:32 crc kubenswrapper[4932]: E0321 10:23:32.702880 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:23:36 crc kubenswrapper[4932]: I0321 10:23:36.696996 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mpdqp/must-gather-5t2qn"]
Mar 21 10:23:36 crc kubenswrapper[4932]: I0321 10:23:36.697707 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mpdqp/must-gather-5t2qn" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="copy" containerID="cri-o://abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81" gracePeriod=2
Mar 21 10:23:36 crc kubenswrapper[4932]: I0321 10:23:36.704588 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mpdqp/must-gather-5t2qn"]
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.176237 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mpdqp_must-gather-5t2qn_0f2bc9c7-c804-4108-a92e-f35274d0da17/copy/0.log"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.177046 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/must-gather-5t2qn"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.256295 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mpdqp_must-gather-5t2qn_0f2bc9c7-c804-4108-a92e-f35274d0da17/copy/0.log"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.257143 4932 generic.go:334] "Generic (PLEG): container finished" podID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerID="abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81" exitCode=143
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.257205 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mpdqp/must-gather-5t2qn"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.257211 4932 scope.go:117] "RemoveContainer" containerID="abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.284670 4932 scope.go:117] "RemoveContainer" containerID="32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.289848 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df9v8\" (UniqueName: \"kubernetes.io/projected/0f2bc9c7-c804-4108-a92e-f35274d0da17-kube-api-access-df9v8\") pod \"0f2bc9c7-c804-4108-a92e-f35274d0da17\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") "
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.290157 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f2bc9c7-c804-4108-a92e-f35274d0da17-must-gather-output\") pod \"0f2bc9c7-c804-4108-a92e-f35274d0da17\" (UID: \"0f2bc9c7-c804-4108-a92e-f35274d0da17\") "
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.301639 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2bc9c7-c804-4108-a92e-f35274d0da17-kube-api-access-df9v8" (OuterVolumeSpecName: "kube-api-access-df9v8") pod "0f2bc9c7-c804-4108-a92e-f35274d0da17" (UID: "0f2bc9c7-c804-4108-a92e-f35274d0da17"). InnerVolumeSpecName "kube-api-access-df9v8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.393212 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df9v8\" (UniqueName: \"kubernetes.io/projected/0f2bc9c7-c804-4108-a92e-f35274d0da17-kube-api-access-df9v8\") on node \"crc\" DevicePath \"\""
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.420987 4932 scope.go:117] "RemoveContainer" containerID="abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81"
Mar 21 10:23:37 crc kubenswrapper[4932]: E0321 10:23:37.421721 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81\": container with ID starting with abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81 not found: ID does not exist" containerID="abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.421770 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81"} err="failed to get container status \"abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81\": rpc error: code = NotFound desc = could not find container \"abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81\": container with ID starting with abad1ebffaf9fac5e95e2809def62107b310d5a990c56873aa26492af1e92c81 not found: ID does not exist"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.421802 4932 scope.go:117] "RemoveContainer" containerID="32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"
Mar 21 10:23:37 crc kubenswrapper[4932]: E0321 10:23:37.422216 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d\": container with ID starting with 32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d not found: ID does not exist" containerID="32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.422254 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d"} err="failed to get container status \"32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d\": rpc error: code = NotFound desc = could not find container \"32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d\": container with ID starting with 32c61ca851c639b1e5caeb90b3a54678c759d066139226e9a75866e1aa5b264d not found: ID does not exist"
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.492952 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f2bc9c7-c804-4108-a92e-f35274d0da17-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0f2bc9c7-c804-4108-a92e-f35274d0da17" (UID: "0f2bc9c7-c804-4108-a92e-f35274d0da17"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.494749 4932 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0f2bc9c7-c804-4108-a92e-f35274d0da17-must-gather-output\") on node \"crc\" DevicePath \"\""
Mar 21 10:23:37 crc kubenswrapper[4932]: I0321 10:23:37.712987 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" path="/var/lib/kubelet/pods/0f2bc9c7-c804-4108-a92e-f35274d0da17/volumes"
Mar 21 10:23:40 crc kubenswrapper[4932]: I0321 10:23:40.702190 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:23:41 crc kubenswrapper[4932]: I0321 10:23:41.295307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"}
Mar 21 10:23:45 crc kubenswrapper[4932]: I0321 10:23:45.702284 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:23:45 crc kubenswrapper[4932]: E0321 10:23:45.702884 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:23:46 crc kubenswrapper[4932]: I0321 10:23:46.702827 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:23:46 crc kubenswrapper[4932]: E0321 10:23:46.703141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:23:47 crc kubenswrapper[4932]: I0321 10:23:47.741104 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:23:47 crc kubenswrapper[4932]: I0321 10:23:47.741397 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:23:49 crc kubenswrapper[4932]: I0321 10:23:49.362514 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" exitCode=1
Mar 21 10:23:49 crc kubenswrapper[4932]: I0321 10:23:49.362542 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"}
Mar 21 10:23:49 crc kubenswrapper[4932]: I0321 10:23:49.362779 4932 scope.go:117] "RemoveContainer" containerID="145e89a9dec2ea2e49bd655fe252c474c0ca5efbb5c15a5b3a020bf412786c66"
Mar 21 10:23:49 crc kubenswrapper[4932]: I0321 10:23:49.363572 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"
Mar 21 10:23:49 crc kubenswrapper[4932]: E0321 10:23:49.363847 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:23:56 crc kubenswrapper[4932]: I0321 10:23:56.702470 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:23:56 crc kubenswrapper[4932]: E0321 10:23:56.703137 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:23:57 crc kubenswrapper[4932]: I0321 10:23:57.740435 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:23:57 crc kubenswrapper[4932]: I0321 10:23:57.741020 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:23:57 crc kubenswrapper[4932]: I0321 10:23:57.741959 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"
Mar 21 10:23:57 crc kubenswrapper[4932]: E0321 10:23:57.742199 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.142141 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568144-ndnp4"]
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143235 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="extract-utilities"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143253 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="extract-utilities"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143275 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="gather"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143284 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="gather"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143292 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="registry-server"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143300 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="registry-server"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143329 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="registry-server"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143336 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="registry-server"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143373 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="extract-utilities"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143382 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="extract-utilities"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143409 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="extract-content"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143417 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="extract-content"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143429 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="extract-content"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143435 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="extract-content"
Mar 21 10:24:00 crc kubenswrapper[4932]: E0321 10:24:00.143488 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="copy"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143496 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="copy"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143701 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="copy"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143722 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b163dd17-31fd-4220-b2f6-a40933f692e0" containerName="registry-server"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143730 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="76922123-2551-4a45-804e-0465640f9fc3" containerName="registry-server"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.143746 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2bc9c7-c804-4108-a92e-f35274d0da17" containerName="gather"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.144442 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.147383 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.148300 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.150333 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.167598 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568144-ndnp4"]
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.242417 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqf6k\" (UniqueName: \"kubernetes.io/projected/c99dfad0-5371-42c9-bc49-3ad291f09825-kube-api-access-jqf6k\") pod \"auto-csr-approver-29568144-ndnp4\" (UID: \"c99dfad0-5371-42c9-bc49-3ad291f09825\") " pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.344898 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqf6k\" (UniqueName: \"kubernetes.io/projected/c99dfad0-5371-42c9-bc49-3ad291f09825-kube-api-access-jqf6k\") pod \"auto-csr-approver-29568144-ndnp4\" (UID: \"c99dfad0-5371-42c9-bc49-3ad291f09825\") " pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.366341 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqf6k\" (UniqueName: \"kubernetes.io/projected/c99dfad0-5371-42c9-bc49-3ad291f09825-kube-api-access-jqf6k\") pod \"auto-csr-approver-29568144-ndnp4\" (UID: \"c99dfad0-5371-42c9-bc49-3ad291f09825\") " pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.477019 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.936984 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568144-ndnp4"]
Mar 21 10:24:00 crc kubenswrapper[4932]: W0321 10:24:00.944520 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc99dfad0_5371_42c9_bc49_3ad291f09825.slice/crio-0ac0c8b9c4fc2acf934e7b0f471da759314c09606554f9b454b83ccaed9f8f34 WatchSource:0}: Error finding container 0ac0c8b9c4fc2acf934e7b0f471da759314c09606554f9b454b83ccaed9f8f34: Status 404 returned error can't find the container with id 0ac0c8b9c4fc2acf934e7b0f471da759314c09606554f9b454b83ccaed9f8f34
Mar 21 10:24:00 crc kubenswrapper[4932]: I0321 10:24:00.947381 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 21 10:24:01 crc kubenswrapper[4932]: I0321 10:24:01.472462 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568144-ndnp4" event={"ID":"c99dfad0-5371-42c9-bc49-3ad291f09825","Type":"ContainerStarted","Data":"0ac0c8b9c4fc2acf934e7b0f471da759314c09606554f9b454b83ccaed9f8f34"}
Mar 21 10:24:01 crc kubenswrapper[4932]: I0321 10:24:01.702665 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:24:02 crc kubenswrapper[4932]: I0321 10:24:02.490930 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerStarted","Data":"8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"}
Mar 21 10:24:03 crc kubenswrapper[4932]: I0321 10:24:03.504371 4932 generic.go:334] "Generic (PLEG): container finished" podID="c99dfad0-5371-42c9-bc49-3ad291f09825" containerID="148517b9e587347d221e6836634e81f1213ad39c21cfd364290e627fb00379e7" exitCode=0
Mar 21 10:24:03 crc kubenswrapper[4932]: I0321 10:24:03.504591 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568144-ndnp4" event={"ID":"c99dfad0-5371-42c9-bc49-3ad291f09825","Type":"ContainerDied","Data":"148517b9e587347d221e6836634e81f1213ad39c21cfd364290e627fb00379e7"}
Mar 21 10:24:04 crc kubenswrapper[4932]: I0321 10:24:04.852018 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:04 crc kubenswrapper[4932]: I0321 10:24:04.960301 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqf6k\" (UniqueName: \"kubernetes.io/projected/c99dfad0-5371-42c9-bc49-3ad291f09825-kube-api-access-jqf6k\") pod \"c99dfad0-5371-42c9-bc49-3ad291f09825\" (UID: \"c99dfad0-5371-42c9-bc49-3ad291f09825\") "
Mar 21 10:24:04 crc kubenswrapper[4932]: I0321 10:24:04.968611 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99dfad0-5371-42c9-bc49-3ad291f09825-kube-api-access-jqf6k" (OuterVolumeSpecName: "kube-api-access-jqf6k") pod "c99dfad0-5371-42c9-bc49-3ad291f09825" (UID: "c99dfad0-5371-42c9-bc49-3ad291f09825"). InnerVolumeSpecName "kube-api-access-jqf6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:24:05 crc kubenswrapper[4932]: I0321 10:24:05.062964 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqf6k\" (UniqueName: \"kubernetes.io/projected/c99dfad0-5371-42c9-bc49-3ad291f09825-kube-api-access-jqf6k\") on node \"crc\" DevicePath \"\""
Mar 21 10:24:05 crc kubenswrapper[4932]: I0321 10:24:05.523499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568144-ndnp4" event={"ID":"c99dfad0-5371-42c9-bc49-3ad291f09825","Type":"ContainerDied","Data":"0ac0c8b9c4fc2acf934e7b0f471da759314c09606554f9b454b83ccaed9f8f34"}
Mar 21 10:24:05 crc kubenswrapper[4932]: I0321 10:24:05.523537 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac0c8b9c4fc2acf934e7b0f471da759314c09606554f9b454b83ccaed9f8f34"
Mar 21 10:24:05 crc kubenswrapper[4932]: I0321 10:24:05.523592 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568144-ndnp4"
Mar 21 10:24:05 crc kubenswrapper[4932]: I0321 10:24:05.936028 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568138-dhdzm"]
Mar 21 10:24:05 crc kubenswrapper[4932]: I0321 10:24:05.946845 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568138-dhdzm"]
Mar 21 10:24:07 crc kubenswrapper[4932]: I0321 10:24:07.724146 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2737adb-00c1-4d5e-8bc1-026b03a5ff75" path="/var/lib/kubelet/pods/f2737adb-00c1-4d5e-8bc1-026b03a5ff75/volumes"
Mar 21 10:24:07 crc kubenswrapper[4932]: I0321 10:24:07.948534 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 10:24:07 crc kubenswrapper[4932]: I0321 10:24:07.948585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 10:24:10 crc kubenswrapper[4932]: I0321 10:24:10.702704 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"
Mar 21 10:24:10 crc kubenswrapper[4932]: E0321 10:24:10.703297 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:24:11 crc kubenswrapper[4932]: I0321 10:24:11.584842 4932 generic.go:334] "Generic (PLEG): container finished" podID="a2137f88-2dc2-4718-bd8d-229745974b9a" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" exitCode=1
Mar 21 10:24:11 crc kubenswrapper[4932]: I0321 10:24:11.584884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7998c44c8d-kb65g" event={"ID":"a2137f88-2dc2-4718-bd8d-229745974b9a","Type":"ContainerDied","Data":"8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"}
Mar 21 10:24:11 crc kubenswrapper[4932]: I0321 10:24:11.584917 4932 scope.go:117] "RemoveContainer" containerID="8e9d87375016f7b211523ed5b625a51e0132bb62b226ce67fd087049c38a00dc"
Mar 21 10:24:11 crc kubenswrapper[4932]: I0321 10:24:11.585705 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"
Mar 21 10:24:11 crc kubenswrapper[4932]: E0321 10:24:11.586020 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:24:11 crc kubenswrapper[4932]: I0321 10:24:11.702721 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:24:11 crc kubenswrapper[4932]: E0321 10:24:11.702976 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:24:17 crc kubenswrapper[4932]: I0321 10:24:17.948203 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 10:24:17 crc kubenswrapper[4932]: I0321 10:24:17.949228 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7998c44c8d-kb65g"
Mar 21 10:24:17 crc kubenswrapper[4932]: I0321 10:24:17.950588 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"
Mar 21 10:24:17 crc kubenswrapper[4932]: E0321 10:24:17.951051 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:24:23 crc kubenswrapper[4932]: I0321 10:24:23.703287 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:24:23 crc kubenswrapper[4932]: E0321 10:24:23.704043 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:24:25 crc kubenswrapper[4932]: I0321 10:24:25.702999 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"
Mar 21 10:24:25 crc kubenswrapper[4932]: E0321 10:24:25.703538 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:24:29 crc kubenswrapper[4932]: I0321 10:24:29.702540 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"
Mar 21 10:24:29 crc kubenswrapper[4932]: E0321 10:24:29.704169 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:24:34 crc kubenswrapper[4932]: I0321 10:24:34.657475 4932 scope.go:117] "RemoveContainer" containerID="535b37d3d2df11aa68875e3bc842a7df8dbb8e717a529e0c8c9c2f943c8a31e2"
Mar 21 10:24:34 crc kubenswrapper[4932]: I0321 10:24:34.698409 4932 scope.go:117] "RemoveContainer" containerID="8e62d0a0b6db3efc7a21df243e36a500dd76b1e60c9091524de9ccec3a0fdc9a"
Mar 21 10:24:37 crc kubenswrapper[4932]: I0321 10:24:37.709107 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"
Mar 21 10:24:37 crc kubenswrapper[4932]: E0321 10:24:37.709874 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:24:37 crc kubenswrapper[4932]: I0321 10:24:37.709917 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:24:37 crc kubenswrapper[4932]: E0321 10:24:37.710201 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84"
Mar 21 10:24:44 crc kubenswrapper[4932]: I0321 10:24:44.702295 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"
Mar 21 10:24:44 crc kubenswrapper[4932]: E0321 10:24:44.702790 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:24:49 crc kubenswrapper[4932]: I0321 10:24:49.702557 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8"
Mar 21 10:24:49 crc kubenswrapper[4932]: E0321 10:24:49.703562 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:24:50 crc kubenswrapper[4932]: I0321 10:24:50.703879 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:24:50 crc kubenswrapper[4932]: E0321 10:24:50.704789 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:24:58 crc kubenswrapper[4932]: I0321 10:24:58.702770 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:24:58 crc kubenswrapper[4932]: E0321 10:24:58.703469 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:25:01 crc kubenswrapper[4932]: I0321 10:25:01.703889 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:25:01 crc kubenswrapper[4932]: E0321 10:25:01.704569 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:25:02 crc kubenswrapper[4932]: I0321 10:25:02.702311 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:25:02 crc kubenswrapper[4932]: E0321 10:25:02.702572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:25:10 crc kubenswrapper[4932]: I0321 10:25:10.702734 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:25:10 crc kubenswrapper[4932]: E0321 10:25:10.703546 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:25:14 crc kubenswrapper[4932]: I0321 10:25:14.702586 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:25:14 crc kubenswrapper[4932]: E0321 10:25:14.703457 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:25:16 crc kubenswrapper[4932]: I0321 10:25:16.702724 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:25:16 crc kubenswrapper[4932]: E0321 10:25:16.703212 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:25:23 crc kubenswrapper[4932]: I0321 10:25:23.703050 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:25:23 crc kubenswrapper[4932]: E0321 10:25:23.705015 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:25:29 crc kubenswrapper[4932]: I0321 10:25:29.702069 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:25:29 crc kubenswrapper[4932]: E0321 10:25:29.702615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:25:31 crc 
kubenswrapper[4932]: I0321 10:25:31.702912 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:25:31 crc kubenswrapper[4932]: E0321 10:25:31.703763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:25:36 crc kubenswrapper[4932]: I0321 10:25:36.703221 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:25:36 crc kubenswrapper[4932]: E0321 10:25:36.704143 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:25:44 crc kubenswrapper[4932]: I0321 10:25:44.703168 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:25:44 crc kubenswrapper[4932]: I0321 10:25:44.703905 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:25:44 crc kubenswrapper[4932]: E0321 10:25:44.704148 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" 
podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:25:44 crc kubenswrapper[4932]: E0321 10:25:44.704209 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:25:49 crc kubenswrapper[4932]: I0321 10:25:49.702257 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:25:49 crc kubenswrapper[4932]: E0321 10:25:49.703002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:25:57 crc kubenswrapper[4932]: I0321 10:25:57.709503 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:25:57 crc kubenswrapper[4932]: E0321 10:25:57.710279 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:25:58 crc kubenswrapper[4932]: I0321 10:25:58.702985 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:25:58 crc kubenswrapper[4932]: E0321 10:25:58.703231 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.145072 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568146-7fw2v"] Mar 21 10:26:00 crc kubenswrapper[4932]: E0321 10:26:00.145860 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99dfad0-5371-42c9-bc49-3ad291f09825" containerName="oc" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.145876 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99dfad0-5371-42c9-bc49-3ad291f09825" containerName="oc" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.146121 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99dfad0-5371-42c9-bc49-3ad291f09825" containerName="oc" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.146911 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.149113 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.149417 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.149998 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.162642 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568146-7fw2v"] Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.237175 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqtj\" (UniqueName: \"kubernetes.io/projected/26696c42-bd40-4b07-8946-e9d5c9de291d-kube-api-access-qxqtj\") pod \"auto-csr-approver-29568146-7fw2v\" (UID: \"26696c42-bd40-4b07-8946-e9d5c9de291d\") " pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.339238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqtj\" (UniqueName: \"kubernetes.io/projected/26696c42-bd40-4b07-8946-e9d5c9de291d-kube-api-access-qxqtj\") pod \"auto-csr-approver-29568146-7fw2v\" (UID: \"26696c42-bd40-4b07-8946-e9d5c9de291d\") " pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.358207 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqtj\" (UniqueName: \"kubernetes.io/projected/26696c42-bd40-4b07-8946-e9d5c9de291d-kube-api-access-qxqtj\") pod \"auto-csr-approver-29568146-7fw2v\" (UID: \"26696c42-bd40-4b07-8946-e9d5c9de291d\") " 
pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.467896 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:00 crc kubenswrapper[4932]: I0321 10:26:00.898733 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568146-7fw2v"] Mar 21 10:26:01 crc kubenswrapper[4932]: I0321 10:26:01.563056 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" event={"ID":"26696c42-bd40-4b07-8946-e9d5c9de291d","Type":"ContainerStarted","Data":"c2ae75230dbf6e62da3ee96041762a99565ac41c9baf07c3cf92e4276ec62c34"} Mar 21 10:26:02 crc kubenswrapper[4932]: I0321 10:26:02.585979 4932 generic.go:334] "Generic (PLEG): container finished" podID="26696c42-bd40-4b07-8946-e9d5c9de291d" containerID="39f3d781ccf69f4ba3e3ed39acc1d9253415c21d81a149c10d119a502f7c35b9" exitCode=0 Mar 21 10:26:02 crc kubenswrapper[4932]: I0321 10:26:02.586479 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" event={"ID":"26696c42-bd40-4b07-8946-e9d5c9de291d","Type":"ContainerDied","Data":"39f3d781ccf69f4ba3e3ed39acc1d9253415c21d81a149c10d119a502f7c35b9"} Mar 21 10:26:03 crc kubenswrapper[4932]: I0321 10:26:03.703130 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:26:03 crc kubenswrapper[4932]: E0321 10:26:03.703694 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:26:03 crc kubenswrapper[4932]: I0321 10:26:03.972328 4932 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:04 crc kubenswrapper[4932]: I0321 10:26:04.040741 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxqtj\" (UniqueName: \"kubernetes.io/projected/26696c42-bd40-4b07-8946-e9d5c9de291d-kube-api-access-qxqtj\") pod \"26696c42-bd40-4b07-8946-e9d5c9de291d\" (UID: \"26696c42-bd40-4b07-8946-e9d5c9de291d\") " Mar 21 10:26:04 crc kubenswrapper[4932]: I0321 10:26:04.048934 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26696c42-bd40-4b07-8946-e9d5c9de291d-kube-api-access-qxqtj" (OuterVolumeSpecName: "kube-api-access-qxqtj") pod "26696c42-bd40-4b07-8946-e9d5c9de291d" (UID: "26696c42-bd40-4b07-8946-e9d5c9de291d"). InnerVolumeSpecName "kube-api-access-qxqtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:26:04 crc kubenswrapper[4932]: I0321 10:26:04.143727 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxqtj\" (UniqueName: \"kubernetes.io/projected/26696c42-bd40-4b07-8946-e9d5c9de291d-kube-api-access-qxqtj\") on node \"crc\" DevicePath \"\"" Mar 21 10:26:04 crc kubenswrapper[4932]: I0321 10:26:04.603178 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" event={"ID":"26696c42-bd40-4b07-8946-e9d5c9de291d","Type":"ContainerDied","Data":"c2ae75230dbf6e62da3ee96041762a99565ac41c9baf07c3cf92e4276ec62c34"} Mar 21 10:26:04 crc kubenswrapper[4932]: I0321 10:26:04.603221 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ae75230dbf6e62da3ee96041762a99565ac41c9baf07c3cf92e4276ec62c34" Mar 21 10:26:04 crc kubenswrapper[4932]: I0321 10:26:04.603610 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568146-7fw2v" Mar 21 10:26:05 crc kubenswrapper[4932]: I0321 10:26:05.044995 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568140-555nb"] Mar 21 10:26:05 crc kubenswrapper[4932]: I0321 10:26:05.052270 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568140-555nb"] Mar 21 10:26:05 crc kubenswrapper[4932]: I0321 10:26:05.714779 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa70f328-7d68-4b42-bef3-f28be74a015c" path="/var/lib/kubelet/pods/fa70f328-7d68-4b42-bef3-f28be74a015c/volumes" Mar 21 10:26:09 crc kubenswrapper[4932]: I0321 10:26:09.702554 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:26:09 crc kubenswrapper[4932]: E0321 10:26:09.703265 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:26:12 crc kubenswrapper[4932]: I0321 10:26:12.703275 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:26:12 crc kubenswrapper[4932]: E0321 10:26:12.704177 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:26:17 crc kubenswrapper[4932]: I0321 
10:26:17.726971 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:26:17 crc kubenswrapper[4932]: E0321 10:26:17.730294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:26:22 crc kubenswrapper[4932]: I0321 10:26:22.702416 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:26:22 crc kubenswrapper[4932]: E0321 10:26:22.703169 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:26:25 crc kubenswrapper[4932]: I0321 10:26:25.703108 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:26:25 crc kubenswrapper[4932]: E0321 10:26:25.704173 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:26:30 crc kubenswrapper[4932]: I0321 10:26:30.704756 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:26:30 
crc kubenswrapper[4932]: E0321 10:26:30.706179 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:26:34 crc kubenswrapper[4932]: I0321 10:26:34.808339 4932 scope.go:117] "RemoveContainer" containerID="8a2c1ec241cd078db91212a59bd84d062968b7d75bfe97b0050c6871d24bd917" Mar 21 10:26:35 crc kubenswrapper[4932]: I0321 10:26:35.703209 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:26:35 crc kubenswrapper[4932]: E0321 10:26:35.703906 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:26:39 crc kubenswrapper[4932]: I0321 10:26:39.774594 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:26:39 crc kubenswrapper[4932]: E0321 10:26:39.775234 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:26:41 crc kubenswrapper[4932]: I0321 10:26:41.703038 4932 scope.go:117] "RemoveContainer" 
containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:26:41 crc kubenswrapper[4932]: E0321 10:26:41.703547 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:26:46 crc kubenswrapper[4932]: I0321 10:26:46.703724 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:26:46 crc kubenswrapper[4932]: E0321 10:26:46.705388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:26:51 crc kubenswrapper[4932]: I0321 10:26:51.702613 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:26:51 crc kubenswrapper[4932]: E0321 10:26:51.703550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:26:54 crc kubenswrapper[4932]: I0321 10:26:54.702620 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:26:54 crc kubenswrapper[4932]: E0321 10:26:54.703252 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:26:57 crc kubenswrapper[4932]: I0321 10:26:57.708646 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:26:57 crc kubenswrapper[4932]: E0321 10:26:57.709249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:27:05 crc kubenswrapper[4932]: I0321 10:27:05.703236 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:27:05 crc kubenswrapper[4932]: E0321 10:27:05.703826 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:27:09 crc kubenswrapper[4932]: I0321 10:27:09.702882 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:27:09 crc kubenswrapper[4932]: E0321 10:27:09.703915 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:27:12 crc kubenswrapper[4932]: I0321 10:27:12.702593 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:27:12 crc kubenswrapper[4932]: E0321 10:27:12.703657 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:27:17 crc kubenswrapper[4932]: I0321 10:27:17.710097 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:27:17 crc kubenswrapper[4932]: E0321 10:27:17.711194 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:27:21 crc kubenswrapper[4932]: I0321 10:27:21.702751 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:27:21 crc kubenswrapper[4932]: E0321 10:27:21.703279 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" 
pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:27:24 crc kubenswrapper[4932]: I0321 10:27:24.703010 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:27:24 crc kubenswrapper[4932]: E0321 10:27:24.703948 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:27:31 crc kubenswrapper[4932]: I0321 10:27:31.702121 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:27:31 crc kubenswrapper[4932]: E0321 10:27:31.703278 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:27:33 crc kubenswrapper[4932]: I0321 10:27:33.702524 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:27:33 crc kubenswrapper[4932]: E0321 10:27:33.703093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:27:37 crc kubenswrapper[4932]: I0321 10:27:37.743923 
4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:27:37 crc kubenswrapper[4932]: E0321 10:27:37.744873 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:27:44 crc kubenswrapper[4932]: I0321 10:27:44.703381 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:27:44 crc kubenswrapper[4932]: E0321 10:27:44.704446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:27:46 crc kubenswrapper[4932]: I0321 10:27:46.703296 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:27:46 crc kubenswrapper[4932]: E0321 10:27:46.704234 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:27:52 crc kubenswrapper[4932]: I0321 10:27:52.702278 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:27:52 crc 
kubenswrapper[4932]: E0321 10:27:52.702939 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:27:56 crc kubenswrapper[4932]: I0321 10:27:56.702632 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:27:56 crc kubenswrapper[4932]: E0321 10:27:56.703332 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:27:57 crc kubenswrapper[4932]: I0321 10:27:57.709454 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:27:57 crc kubenswrapper[4932]: E0321 10:27:57.709734 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.143633 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29568148-6s66n"] Mar 21 10:28:00 crc kubenswrapper[4932]: E0321 10:28:00.144298 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26696c42-bd40-4b07-8946-e9d5c9de291d" 
containerName="oc" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.144312 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="26696c42-bd40-4b07-8946-e9d5c9de291d" containerName="oc" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.144538 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="26696c42-bd40-4b07-8946-e9d5c9de291d" containerName="oc" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.145422 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.148031 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.148051 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6rhnp" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.148749 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.161850 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568148-6s66n"] Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.258959 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hk2\" (UniqueName: \"kubernetes.io/projected/0bc619a6-2221-44dc-8b72-3b17b5d13e68-kube-api-access-79hk2\") pod \"auto-csr-approver-29568148-6s66n\" (UID: \"0bc619a6-2221-44dc-8b72-3b17b5d13e68\") " pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.361034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hk2\" (UniqueName: \"kubernetes.io/projected/0bc619a6-2221-44dc-8b72-3b17b5d13e68-kube-api-access-79hk2\") pod 
\"auto-csr-approver-29568148-6s66n\" (UID: \"0bc619a6-2221-44dc-8b72-3b17b5d13e68\") " pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.379905 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hk2\" (UniqueName: \"kubernetes.io/projected/0bc619a6-2221-44dc-8b72-3b17b5d13e68-kube-api-access-79hk2\") pod \"auto-csr-approver-29568148-6s66n\" (UID: \"0bc619a6-2221-44dc-8b72-3b17b5d13e68\") " pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.463871 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:00 crc kubenswrapper[4932]: I0321 10:28:00.895115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29568148-6s66n"] Mar 21 10:28:01 crc kubenswrapper[4932]: I0321 10:28:01.661283 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568148-6s66n" event={"ID":"0bc619a6-2221-44dc-8b72-3b17b5d13e68","Type":"ContainerStarted","Data":"08b1a1b13ee88c4b0c417782b5b94da899c99b7ed8e627a2a797ae0ef288b76c"} Mar 21 10:28:02 crc kubenswrapper[4932]: I0321 10:28:02.672176 4932 generic.go:334] "Generic (PLEG): container finished" podID="0bc619a6-2221-44dc-8b72-3b17b5d13e68" containerID="510599fdca7c1238686a1964e5a5854e67b4d940639b21bde95670267ca09539" exitCode=0 Mar 21 10:28:02 crc kubenswrapper[4932]: I0321 10:28:02.672240 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568148-6s66n" event={"ID":"0bc619a6-2221-44dc-8b72-3b17b5d13e68","Type":"ContainerDied","Data":"510599fdca7c1238686a1964e5a5854e67b4d940639b21bde95670267ca09539"} Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.030000 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.143412 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79hk2\" (UniqueName: \"kubernetes.io/projected/0bc619a6-2221-44dc-8b72-3b17b5d13e68-kube-api-access-79hk2\") pod \"0bc619a6-2221-44dc-8b72-3b17b5d13e68\" (UID: \"0bc619a6-2221-44dc-8b72-3b17b5d13e68\") " Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.150584 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc619a6-2221-44dc-8b72-3b17b5d13e68-kube-api-access-79hk2" (OuterVolumeSpecName: "kube-api-access-79hk2") pod "0bc619a6-2221-44dc-8b72-3b17b5d13e68" (UID: "0bc619a6-2221-44dc-8b72-3b17b5d13e68"). InnerVolumeSpecName "kube-api-access-79hk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.246757 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79hk2\" (UniqueName: \"kubernetes.io/projected/0bc619a6-2221-44dc-8b72-3b17b5d13e68-kube-api-access-79hk2\") on node \"crc\" DevicePath \"\"" Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.691889 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29568148-6s66n" event={"ID":"0bc619a6-2221-44dc-8b72-3b17b5d13e68","Type":"ContainerDied","Data":"08b1a1b13ee88c4b0c417782b5b94da899c99b7ed8e627a2a797ae0ef288b76c"} Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.692391 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b1a1b13ee88c4b0c417782b5b94da899c99b7ed8e627a2a797ae0ef288b76c" Mar 21 10:28:04 crc kubenswrapper[4932]: I0321 10:28:04.692529 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29568148-6s66n" Mar 21 10:28:05 crc kubenswrapper[4932]: I0321 10:28:05.105418 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29568142-v92bl"] Mar 21 10:28:05 crc kubenswrapper[4932]: I0321 10:28:05.112434 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29568142-v92bl"] Mar 21 10:28:05 crc kubenswrapper[4932]: I0321 10:28:05.712015 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3a82cd-8c49-4998-98fe-3d6511c83ce5" path="/var/lib/kubelet/pods/6e3a82cd-8c49-4998-98fe-3d6511c83ce5/volumes" Mar 21 10:28:06 crc kubenswrapper[4932]: I0321 10:28:06.703428 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:28:06 crc kubenswrapper[4932]: E0321 10:28:06.704110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:28:08 crc kubenswrapper[4932]: I0321 10:28:08.702868 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:28:08 crc kubenswrapper[4932]: I0321 10:28:08.703174 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:28:08 crc kubenswrapper[4932]: E0321 10:28:08.703368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:28:08 crc kubenswrapper[4932]: E0321 10:28:08.703368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:28:18 crc kubenswrapper[4932]: I0321 10:28:18.702892 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:28:18 crc kubenswrapper[4932]: E0321 10:28:18.703587 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4n7b_openshift-machine-config-operator(8044dc63-0327-41d4-93fe-af2287271a84)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" podUID="8044dc63-0327-41d4-93fe-af2287271a84" Mar 21 10:28:19 crc kubenswrapper[4932]: I0321 10:28:19.702627 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:28:19 crc kubenswrapper[4932]: E0321 10:28:19.702904 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:28:22 crc kubenswrapper[4932]: I0321 10:28:22.702968 4932 scope.go:117] "RemoveContainer" 
containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:28:22 crc kubenswrapper[4932]: E0321 10:28:22.703616 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:28:31 crc kubenswrapper[4932]: I0321 10:28:31.702303 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:28:31 crc kubenswrapper[4932]: E0321 10:28:31.703139 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:28:31 crc kubenswrapper[4932]: I0321 10:28:31.703398 4932 scope.go:117] "RemoveContainer" containerID="f3a4ec0b42968586bc03deac7b73ff83d9ed3e691373fb0ee66d9c858c5ddac8" Mar 21 10:28:32 crc kubenswrapper[4932]: I0321 10:28:32.932133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4n7b" event={"ID":"8044dc63-0327-41d4-93fe-af2287271a84","Type":"ContainerStarted","Data":"dd3049aa73c720ff76c325a64b00be69f0aca5ba337f13ce9c1f94bd20e6065a"} Mar 21 10:28:34 crc kubenswrapper[4932]: I0321 10:28:34.894051 4932 scope.go:117] "RemoveContainer" containerID="41b50031c1f5723bca263865ed647994fc81623a532c85364cdd8a20f7c10249" Mar 21 10:28:36 crc kubenswrapper[4932]: I0321 10:28:36.703088 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:28:36 crc kubenswrapper[4932]: E0321 
10:28:36.704003 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385" Mar 21 10:28:46 crc kubenswrapper[4932]: I0321 10:28:46.702560 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff" Mar 21 10:28:46 crc kubenswrapper[4932]: E0321 10:28:46.703516 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.019590 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-745n2"] Mar 21 10:28:47 crc kubenswrapper[4932]: E0321 10:28:47.020075 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc619a6-2221-44dc-8b72-3b17b5d13e68" containerName="oc" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.020097 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc619a6-2221-44dc-8b72-3b17b5d13e68" containerName="oc" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.020341 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc619a6-2221-44dc-8b72-3b17b5d13e68" containerName="oc" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.022139 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.031173 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-745n2"] Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.161205 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-utilities\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.161407 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hzc2\" (UniqueName: \"kubernetes.io/projected/24f63402-7d15-4e29-a969-a7b056362c61-kube-api-access-6hzc2\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.161482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-catalog-content\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.263225 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-utilities\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.263310 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6hzc2\" (UniqueName: \"kubernetes.io/projected/24f63402-7d15-4e29-a969-a7b056362c61-kube-api-access-6hzc2\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.263364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-catalog-content\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.263971 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-utilities\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.264001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-catalog-content\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.283918 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hzc2\" (UniqueName: \"kubernetes.io/projected/24f63402-7d15-4e29-a969-a7b056362c61-kube-api-access-6hzc2\") pod \"community-operators-745n2\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") " pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.348819 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:47 crc kubenswrapper[4932]: I0321 10:28:47.834514 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-745n2"] Mar 21 10:28:47 crc kubenswrapper[4932]: W0321 10:28:47.844266 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f63402_7d15_4e29_a969_a7b056362c61.slice/crio-4017a543f135444ce3a8919104ae6c4f0fb1a959611a6e397802dd863ea83094 WatchSource:0}: Error finding container 4017a543f135444ce3a8919104ae6c4f0fb1a959611a6e397802dd863ea83094: Status 404 returned error can't find the container with id 4017a543f135444ce3a8919104ae6c4f0fb1a959611a6e397802dd863ea83094 Mar 21 10:28:48 crc kubenswrapper[4932]: I0321 10:28:48.073547 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerStarted","Data":"1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b"} Mar 21 10:28:48 crc kubenswrapper[4932]: I0321 10:28:48.073601 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerStarted","Data":"4017a543f135444ce3a8919104ae6c4f0fb1a959611a6e397802dd863ea83094"} Mar 21 10:28:49 crc kubenswrapper[4932]: I0321 10:28:49.082597 4932 generic.go:334] "Generic (PLEG): container finished" podID="24f63402-7d15-4e29-a969-a7b056362c61" containerID="1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b" exitCode=0 Mar 21 10:28:49 crc kubenswrapper[4932]: I0321 10:28:49.082681 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" 
event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerDied","Data":"1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b"} Mar 21 10:28:49 crc kubenswrapper[4932]: I0321 10:28:49.703011 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166" Mar 21 10:28:50 crc kubenswrapper[4932]: I0321 10:28:50.098285 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerStarted","Data":"f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf"} Mar 21 10:28:50 crc kubenswrapper[4932]: I0321 10:28:50.102754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerStarted","Data":"c7fad1c8ddf573c7c12fd13fb14af07f478fad89d422c35fa32df2be2f09a79c"} Mar 21 10:28:52 crc kubenswrapper[4932]: I0321 10:28:52.118941 4932 generic.go:334] "Generic (PLEG): container finished" podID="24f63402-7d15-4e29-a969-a7b056362c61" containerID="f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf" exitCode=0 Mar 21 10:28:52 crc kubenswrapper[4932]: I0321 10:28:52.119006 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerDied","Data":"f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf"} Mar 21 10:28:53 crc kubenswrapper[4932]: I0321 10:28:53.136717 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerStarted","Data":"e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3"} Mar 21 10:28:53 crc kubenswrapper[4932]: I0321 10:28:53.158004 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-745n2" podStartSLOduration=3.624137123 podStartE2EDuration="7.157985279s" podCreationTimestamp="2026-03-21 10:28:46 +0000 UTC" firstStartedPulling="2026-03-21 10:28:49.085202018 +0000 UTC m=+5432.680400287" lastFinishedPulling="2026-03-21 10:28:52.619050174 +0000 UTC m=+5436.214248443" observedRunningTime="2026-03-21 10:28:53.155811431 +0000 UTC m=+5436.751009710" watchObservedRunningTime="2026-03-21 10:28:53.157985279 +0000 UTC m=+5436.753183558" Mar 21 10:28:57 crc kubenswrapper[4932]: I0321 10:28:57.349172 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:57 crc kubenswrapper[4932]: I0321 10:28:57.351139 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:57 crc kubenswrapper[4932]: I0321 10:28:57.396756 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-745n2" Mar 21 10:28:57 crc kubenswrapper[4932]: I0321 10:28:57.741124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:28:57 crc kubenswrapper[4932]: I0321 10:28:57.741450 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj" Mar 21 10:28:58 crc kubenswrapper[4932]: I0321 10:28:58.179989 4932 generic.go:334] "Generic (PLEG): container finished" podID="13285608-51c1-4307-a442-e0cd0e881385" containerID="c7fad1c8ddf573c7c12fd13fb14af07f478fad89d422c35fa32df2be2f09a79c" exitCode=1 Mar 21 10:28:58 crc kubenswrapper[4932]: I0321 10:28:58.180062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf6fd964-2w7xj" event={"ID":"13285608-51c1-4307-a442-e0cd0e881385","Type":"ContainerDied","Data":"c7fad1c8ddf573c7c12fd13fb14af07f478fad89d422c35fa32df2be2f09a79c"} Mar 21 10:28:58 
crc kubenswrapper[4932]: I0321 10:28:58.180103 4932 scope.go:117] "RemoveContainer" containerID="1aca119812b90d0923a3f44807aba146b90d4aa51aa1231ab7831d251a0cc166"
Mar 21 10:28:58 crc kubenswrapper[4932]: I0321 10:28:58.180807 4932 scope.go:117] "RemoveContainer" containerID="c7fad1c8ddf573c7c12fd13fb14af07f478fad89d422c35fa32df2be2f09a79c"
Mar 21 10:28:58 crc kubenswrapper[4932]: E0321 10:28:58.181074 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:28:58 crc kubenswrapper[4932]: I0321 10:28:58.226374 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-745n2"
Mar 21 10:28:58 crc kubenswrapper[4932]: I0321 10:28:58.275421 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-745n2"]
Mar 21 10:28:58 crc kubenswrapper[4932]: I0321 10:28:58.702711 4932 scope.go:117] "RemoveContainer" containerID="8cdf6da991ee2a54205051b1d4b937c9be7fda46584316dbd4706034f0f911ff"
Mar 21 10:28:58 crc kubenswrapper[4932]: E0321 10:28:58.703512 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-7998c44c8d-kb65g_openstack(a2137f88-2dc2-4718-bd8d-229745974b9a)\"" pod="openstack/horizon-7998c44c8d-kb65g" podUID="a2137f88-2dc2-4718-bd8d-229745974b9a"
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.202259 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-745n2" podUID="24f63402-7d15-4e29-a969-a7b056362c61" containerName="registry-server" containerID="cri-o://e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3" gracePeriod=2
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.735401 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-745n2"
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.870220 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-utilities\") pod \"24f63402-7d15-4e29-a969-a7b056362c61\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") "
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.870335 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hzc2\" (UniqueName: \"kubernetes.io/projected/24f63402-7d15-4e29-a969-a7b056362c61-kube-api-access-6hzc2\") pod \"24f63402-7d15-4e29-a969-a7b056362c61\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") "
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.871253 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-catalog-content\") pod \"24f63402-7d15-4e29-a969-a7b056362c61\" (UID: \"24f63402-7d15-4e29-a969-a7b056362c61\") "
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.871046 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-utilities" (OuterVolumeSpecName: "utilities") pod "24f63402-7d15-4e29-a969-a7b056362c61" (UID: "24f63402-7d15-4e29-a969-a7b056362c61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.871754 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.878385 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f63402-7d15-4e29-a969-a7b056362c61-kube-api-access-6hzc2" (OuterVolumeSpecName: "kube-api-access-6hzc2") pod "24f63402-7d15-4e29-a969-a7b056362c61" (UID: "24f63402-7d15-4e29-a969-a7b056362c61"). InnerVolumeSpecName "kube-api-access-6hzc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.936615 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24f63402-7d15-4e29-a969-a7b056362c61" (UID: "24f63402-7d15-4e29-a969-a7b056362c61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.973089 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hzc2\" (UniqueName: \"kubernetes.io/projected/24f63402-7d15-4e29-a969-a7b056362c61-kube-api-access-6hzc2\") on node \"crc\" DevicePath \"\""
Mar 21 10:29:00 crc kubenswrapper[4932]: I0321 10:29:00.973125 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f63402-7d15-4e29-a969-a7b056362c61-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.213892 4932 generic.go:334] "Generic (PLEG): container finished" podID="24f63402-7d15-4e29-a969-a7b056362c61" containerID="e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3" exitCode=0
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.213931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerDied","Data":"e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3"}
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.213955 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-745n2"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.213968 4932 scope.go:117] "RemoveContainer" containerID="e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.213958 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745n2" event={"ID":"24f63402-7d15-4e29-a969-a7b056362c61","Type":"ContainerDied","Data":"4017a543f135444ce3a8919104ae6c4f0fb1a959611a6e397802dd863ea83094"}
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.246713 4932 scope.go:117] "RemoveContainer" containerID="f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.248972 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-745n2"]
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.256853 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-745n2"]
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.269536 4932 scope.go:117] "RemoveContainer" containerID="1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.321654 4932 scope.go:117] "RemoveContainer" containerID="e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3"
Mar 21 10:29:01 crc kubenswrapper[4932]: E0321 10:29:01.322148 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3\": container with ID starting with e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3 not found: ID does not exist" containerID="e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.322203 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3"} err="failed to get container status \"e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3\": rpc error: code = NotFound desc = could not find container \"e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3\": container with ID starting with e15a4352d5e7cfe7118d313b184a152b214f13d3ed052920dea47f0d227387c3 not found: ID does not exist"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.322235 4932 scope.go:117] "RemoveContainer" containerID="f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf"
Mar 21 10:29:01 crc kubenswrapper[4932]: E0321 10:29:01.322831 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf\": container with ID starting with f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf not found: ID does not exist" containerID="f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.322861 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf"} err="failed to get container status \"f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf\": rpc error: code = NotFound desc = could not find container \"f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf\": container with ID starting with f7839f0c7eba642eadad04ab828737bedd022be6cff6bc950abad31bf23eabcf not found: ID does not exist"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.322881 4932 scope.go:117] "RemoveContainer" containerID="1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b"
Mar 21 10:29:01 crc kubenswrapper[4932]: E0321 10:29:01.323298 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b\": container with ID starting with 1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b not found: ID does not exist" containerID="1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.323327 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b"} err="failed to get container status \"1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b\": rpc error: code = NotFound desc = could not find container \"1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b\": container with ID starting with 1357af7212a6172b4648e0f764f4d88596233c5b1960d656a2cff59569d1131b not found: ID does not exist"
Mar 21 10:29:01 crc kubenswrapper[4932]: I0321 10:29:01.715263 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f63402-7d15-4e29-a969-a7b056362c61" path="/var/lib/kubelet/pods/24f63402-7d15-4e29-a969-a7b056362c61/volumes"
Mar 21 10:29:07 crc kubenswrapper[4932]: I0321 10:29:07.741202 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:29:07 crc kubenswrapper[4932]: I0321 10:29:07.743531 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf6fd964-2w7xj"
Mar 21 10:29:07 crc kubenswrapper[4932]: I0321 10:29:07.744485 4932 scope.go:117] "RemoveContainer" containerID="c7fad1c8ddf573c7c12fd13fb14af07f478fad89d422c35fa32df2be2f09a79c"
Mar 21 10:29:07 crc kubenswrapper[4932]: E0321 10:29:07.744730 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"
Mar 21 10:29:08 crc kubenswrapper[4932]: I0321 10:29:08.288770 4932 scope.go:117] "RemoveContainer" containerID="c7fad1c8ddf573c7c12fd13fb14af07f478fad89d422c35fa32df2be2f09a79c"
Mar 21 10:29:08 crc kubenswrapper[4932]: E0321 10:29:08.289414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-fbf6fd964-2w7xj_openstack(13285608-51c1-4307-a442-e0cd0e881385)\"" pod="openstack/horizon-fbf6fd964-2w7xj" podUID="13285608-51c1-4307-a442-e0cd0e881385"